Mar 18 08:46:24.879301 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 18 08:46:25.533456 master-0 kubenswrapper[4031]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 08:46:25.537318 master-0 kubenswrapper[4031]: I0318 08:46:25.536462 4031 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 18 08:46:25.542813 master-0 kubenswrapper[4031]: W0318 08:46:25.542765 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:25.542813 master-0 kubenswrapper[4031]: W0318 08:46:25.542798 4031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:25.542813 master-0 kubenswrapper[4031]: W0318 08:46:25.542810 4031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:25.542813 master-0 kubenswrapper[4031]: W0318 08:46:25.542822 4031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542833 4031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542843 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542851 4031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542859 4031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542867 4031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542876 4031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542886 4031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542895 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542903 4031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542912 4031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542920 4031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542928 4031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542935 4031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542943 4031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542951 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542959 4031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542967 4031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542975 4031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:25.543051 master-0 kubenswrapper[4031]: W0318 08:46:25.542988 4031 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.542996 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543004 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543013 4031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543023 4031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543033 4031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543041 4031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543050 4031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543059 4031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543068 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543077 4031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543085 4031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543095 4031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543105 4031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543113 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543122 4031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543130 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543138 4031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543148 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543157 4031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:25.544006 master-0 kubenswrapper[4031]: W0318 08:46:25.543165 4031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543174 4031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543182 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543191 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543199 4031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543207 4031 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543214 4031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543222 4031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543229 4031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543238 4031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543246 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543254 4031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543261 4031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543269 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543278 4031 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543285 4031 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543293 4031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543300 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543308 4031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543317 4031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:25.545012 master-0 kubenswrapper[4031]: W0318 08:46:25.543325 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543332 4031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543342 4031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543351 4031 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543362 4031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543372 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543381 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543388 4031 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543397 4031 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: W0318 08:46:25.543405 4031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544404 4031 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544469 4031 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544487 4031 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544498 4031 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544510 4031 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544520 4031 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544535 4031 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544554 4031 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544595 4031 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544606 4031 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544616 4031 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544625 4031 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 08:46:25.546109 master-0 kubenswrapper[4031]: I0318 08:46:25.544635 4031 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544644 4031 flags.go:64] FLAG: --cgroup-root=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544652 4031 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544662 4031 flags.go:64] FLAG: --client-ca-file=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544674 4031 flags.go:64] FLAG: --cloud-config=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544683 4031 flags.go:64] FLAG: --cloud-provider=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544691 4031 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544702 4031 flags.go:64] FLAG: --cluster-domain=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544711 4031 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544721 4031 flags.go:64] FLAG: --config-dir=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544729 4031 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544738 4031 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544750 4031 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544759 4031 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544769 4031 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544778 4031 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544787 4031 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544796 4031 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544805 4031 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544815 4031 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544824 4031 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544834 4031 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544843 4031 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544852 4031 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544860 4031 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 08:46:25.547133 master-0 kubenswrapper[4031]: I0318 08:46:25.544870 4031 flags.go:64] FLAG: --enable-server="true"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544879 4031 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544891 4031 flags.go:64] FLAG: --event-burst="100"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544901 4031 flags.go:64] FLAG: --event-qps="50"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544909 4031 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544920 4031 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544929 4031 flags.go:64] FLAG: --eviction-hard=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544940 4031 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544949 4031 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544957 4031 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544967 4031 flags.go:64] FLAG: --eviction-soft=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544977 4031 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544986 4031 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.544995 4031 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545003 4031 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545012 4031 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545021 4031 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545031 4031 flags.go:64] FLAG: --feature-gates=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545042 4031 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545051 4031 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545061 4031 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545071 4031 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545080 4031 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545089 4031 flags.go:64] FLAG: --help="false"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545098 4031 flags.go:64] FLAG: --hostname-override=""
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545108 4031 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 08:46:25.548363 master-0 kubenswrapper[4031]: I0318 08:46:25.545118 4031 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545126 4031 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545135 4031 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545144 4031 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545153 4031 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545162 4031 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545170 4031 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545179 4031 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545188 4031 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545198 4031 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545207 4031 flags.go:64] FLAG: --kube-reserved=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545215 4031 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545224 4031 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545234 4031 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545243 4031 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545254 4031 flags.go:64] FLAG: --lock-file=""
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545262 4031 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545273 4031 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545282 4031 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545305 4031 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545315 4031 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545324 4031 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545333 4031 flags.go:64] FLAG: --logging-format="text"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545342 4031 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545351 4031 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 08:46:25.549653 master-0 kubenswrapper[4031]: I0318 08:46:25.545360 4031 flags.go:64] FLAG: --manifest-url=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545369 4031 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545381 4031 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545390 4031 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545401 4031 flags.go:64] FLAG: --max-pods="110"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545410 4031 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545420 4031 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545429 4031 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545438 4031 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545447 4031 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545456 4031 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545465 4031 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545484 4031 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545493 4031 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545502 4031 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545512 4031 flags.go:64] FLAG: --pod-cidr=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545520 4031 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545533 4031 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545542 4031 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545552 4031 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545560 4031 flags.go:64] FLAG: --port="10250"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545596 4031 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545606 4031 flags.go:64] FLAG: --provider-id=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545615 4031 flags.go:64] FLAG: --qos-reserved=""
Mar 18 08:46:25.550744 master-0 kubenswrapper[4031]: I0318 08:46:25.545624 4031 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545634 4031 flags.go:64] FLAG: --register-node="true"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545646 4031 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545656 4031 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545670 4031 flags.go:64] FLAG: --registry-burst="10"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545679 4031 flags.go:64] FLAG: --registry-qps="5"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545688 4031 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545697 4031 flags.go:64] FLAG: --reserved-memory=""
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545709 4031 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545718 4031 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545727 4031 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545736 4031 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545745 4031 flags.go:64] FLAG: --runonce="false"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545754 4031 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545764 4031 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545774 4031 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545783 4031 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545792 4031 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545802 4031 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545811 4031 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545820 4031 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545829 4031 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545838 4031 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545847 4031 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545856 4031 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 08:46:25.551799 master-0 kubenswrapper[4031]: I0318 08:46:25.545865 4031 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545874 4031 flags.go:64] FLAG: --system-cgroups=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545882 4031 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545897 4031 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545906 4031 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545914 4031 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545925 4031 flags.go:64] FLAG: --tls-min-version=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545934 4031 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545943 4031 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545953 4031 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545961 4031 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545970 4031 flags.go:64] FLAG: --v="2"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545982 4031 flags.go:64] FLAG: --version="false"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.545995 4031 flags.go:64] FLAG: --vmodule=""
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.546006 4031 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: I0318 08:46:25.546016 4031 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546220 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546235 4031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546244 4031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546253 4031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546263 4031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546272 4031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546279 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:25.552930 master-0 kubenswrapper[4031]: W0318 08:46:25.546287 4031 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546295 4031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546303 4031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546311 4031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546318 4031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546326 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546333 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546341 4031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546349 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546357 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546364 4031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546372 4031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546380 4031 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546387 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546395 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.546402 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.547367 4031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.547388 4031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.547397 4031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:25.554028 master-0 kubenswrapper[4031]: W0318 08:46:25.547408 4031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547418 4031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547429 4031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547440 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547449 4031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547457 4031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547465 4031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547476 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547487 4031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547497 4031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547506 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547516 4031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547525 4031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547535 4031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547544 4031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547554 4031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547590 4031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547600 4031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:25.554955 master-0 kubenswrapper[4031]: W0318 08:46:25.547608 4031 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547617 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547625 4031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547634 4031 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547642 4031 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547650 4031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547659 4031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547667 4031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547675 4031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547682 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547690 4031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547698 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547707 4031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547723 4031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547731 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547738 4031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547746 4031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547754 4031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547762 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547770 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:25.555986 master-0 kubenswrapper[4031]: W0318 08:46:25.547777 4031 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547785 4031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547793 4031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547801 4031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547808 4031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547817 4031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547825 4031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: W0318 08:46:25.547833 4031 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:25.557087 master-0 kubenswrapper[4031]: I0318 08:46:25.547854 4031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:25.562097 master-0 kubenswrapper[4031]: I0318 08:46:25.562023 4031 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 08:46:25.562097 master-0 kubenswrapper[4031]: I0318 08:46:25.562087 4031 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562231 4031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562245 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562256 4031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562265 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562273 4031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562281 4031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562289 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562297 4031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562305 4031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562315 4031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562325 4031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562336 4031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562348 4031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562358 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562368 4031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562377 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562387 4031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562396 4031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562405 4031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:25.562446 master-0 kubenswrapper[4031]: W0318 08:46:25.562413 4031 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562423 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562431 4031 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562439 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562447 4031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562457 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562467 4031 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562476 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562484 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562493 4031 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562502 4031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562510 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562518 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562526 4031 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562533 4031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562544 4031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562552 4031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562560 4031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562615 4031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562630 4031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:25.563342 master-0 kubenswrapper[4031]: W0318 08:46:25.562640 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562652 4031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562664 4031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562675 4031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562687 4031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562698 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562708 4031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562718 4031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562729 4031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562743 4031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562754 4031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562765 4031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562777 4031 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562789 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562797 4031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562805 4031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562812 4031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562820 4031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562828 4031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:25.564278 master-0 kubenswrapper[4031]: W0318 08:46:25.562839 4031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562849 4031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562859 4031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562867 4031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562876 4031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562885 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562893 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562900 4031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562909 4031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562916 4031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562925 4031 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562933 4031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562943 4031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.562951 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: I0318 08:46:25.562965 4031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:25.565722 master-0 kubenswrapper[4031]: W0318 08:46:25.563230 4031 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563244 4031 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563255 4031 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563265 4031 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563275 4031 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563285 4031 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563295 4031 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563303 4031 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563310 4031 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563318 4031 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563327 4031 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563334 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563342 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563351 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563359 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563368 4031 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563379 4031 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563390 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563399 4031 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563408 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:46:25.566392 master-0 kubenswrapper[4031]: W0318 08:46:25.563416 4031 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563424 4031 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563434 4031 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563441 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563449 4031 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563458 4031 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563466 4031 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563473 4031 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563481 4031 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563489 4031 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563496 4031 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563504 4031 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563512 4031 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563521 4031 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563529 4031 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563539 4031 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563549 4031 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563559 4031 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563596 4031 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563605 4031 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:46:25.567372 master-0 kubenswrapper[4031]: W0318 08:46:25.563614 4031 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563625 4031 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563637 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563647 4031 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563658 4031 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563668 4031 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563677 4031 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563687 4031 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563697 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563708 4031 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563718 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563728 4031 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563737 4031 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563749 4031 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563761 4031 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563774 4031 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563785 4031 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563793 4031 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563802 4031 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:46:25.568314 master-0 kubenswrapper[4031]: W0318 08:46:25.563811 4031 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563818 4031 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563829 4031 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563838 4031 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563848 4031 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563858 4031 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563868 4031 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563878 4031 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563887 4031 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563894 4031 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563904 4031 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563912 4031 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: W0318 08:46:25.563919 4031 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: I0318 08:46:25.563933 4031 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:46:25.569362 master-0 kubenswrapper[4031]: I0318 08:46:25.565116 4031 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 08:46:25.570119 master-0 kubenswrapper[4031]: I0318 08:46:25.569175 4031 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 18 08:46:25.570820 master-0 kubenswrapper[4031]: I0318 08:46:25.570773 4031 server.go:997] "Starting client certificate rotation"
Mar 18 08:46:25.570820 master-0 kubenswrapper[4031]: I0318 08:46:25.570819 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 08:46:25.571137 master-0 kubenswrapper[4031]: I0318 08:46:25.571055 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 08:46:25.602900 master-0 kubenswrapper[4031]: I0318 08:46:25.602809 4031 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:46:25.605968 master-0 kubenswrapper[4031]: I0318 08:46:25.605881 4031 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:46:25.609099 master-0 kubenswrapper[4031]: E0318 08:46:25.609034 4031 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:25.628341 master-0 kubenswrapper[4031]: I0318 08:46:25.628232 4031 log.go:25] "Validated CRI v1 runtime API"
Mar 18 08:46:25.634495 master-0 kubenswrapper[4031]: I0318 08:46:25.634436 4031 log.go:25] "Validated CRI v1 image API"
Mar 18 08:46:25.637853 master-0 kubenswrapper[4031]: I0318 08:46:25.637793 4031 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 08:46:25.647766 master-0 kubenswrapper[4031]: I0318 08:46:25.647706 4031 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c54ba44d-560c-4408-b24b-989ec8b7c22d:/dev/vda3]
Mar 18 08:46:25.647890 master-0 kubenswrapper[4031]: I0318 08:46:25.647753 4031 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 18 08:46:25.676797 master-0 kubenswrapper[4031]: I0318 08:46:25.676188 4031 manager.go:217]
Machine: {Timestamp:2026-03-18 08:46:25.673326763 +0000 UTC m=+0.602851833 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:a182270b4b4e4574b525d56213aa67ea SystemUUID:a182270b-4b4e-4574-b525-d56213aa67ea BootID:c890c208-5a3a-4b66-9a9b-e57ae2c6aae9 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:21:a5:eb Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:b3:c6:d8 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:1a:32:43:41:d1:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 08:46:25.676797 master-0 kubenswrapper[4031]: I0318 08:46:25.676692 4031 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 18 08:46:25.677179 master-0 kubenswrapper[4031]: I0318 08:46:25.676948 4031 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 08:46:25.677614 master-0 kubenswrapper[4031]: I0318 08:46:25.677529 4031 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 08:46:25.678075 master-0 kubenswrapper[4031]: I0318 08:46:25.677992 4031 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 08:46:25.678484 master-0 kubenswrapper[4031]: I0318 08:46:25.678054 4031 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 08:46:25.680470 master-0 kubenswrapper[4031]: I0318 08:46:25.680403 4031 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 08:46:25.680470 master-0 kubenswrapper[4031]: I0318 08:46:25.680453 4031 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 08:46:25.680470 master-0 kubenswrapper[4031]: I0318 08:46:25.680472 4031 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:46:25.680816 master-0 kubenswrapper[4031]: I0318 08:46:25.680522 4031 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 08:46:25.680816 master-0 kubenswrapper[4031]: I0318 08:46:25.680759 4031 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:46:25.680934 master-0 kubenswrapper[4031]: I0318 08:46:25.680893 4031 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 08:46:25.684880 master-0 kubenswrapper[4031]: I0318 08:46:25.684835 4031 kubelet.go:418] "Attempting to sync node with API server" Mar 18 08:46:25.684880 master-0 kubenswrapper[4031]: I0318 08:46:25.684879 4031 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 08:46:25.685634 master-0 kubenswrapper[4031]: I0318 08:46:25.684907 4031 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 08:46:25.685634 master-0 kubenswrapper[4031]: I0318 08:46:25.684930 4031 kubelet.go:324] "Adding apiserver pod source" Mar 18 08:46:25.685634 master-0 kubenswrapper[4031]: I0318 08:46:25.684957 4031 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 08:46:25.689555 master-0 kubenswrapper[4031]: W0318 08:46:25.689335 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:25.689555 master-0 kubenswrapper[4031]: W0318 08:46:25.689420 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 
08:46:25.689555 master-0 kubenswrapper[4031]: E0318 08:46:25.689472 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:25.689555 master-0 kubenswrapper[4031]: E0318 08:46:25.689508 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:25.691210 master-0 kubenswrapper[4031]: I0318 08:46:25.691166 4031 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 08:46:25.696853 master-0 kubenswrapper[4031]: I0318 08:46:25.696790 4031 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 08:46:25.698753 master-0 kubenswrapper[4031]: I0318 08:46:25.698699 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698774 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698801 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698820 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698840 4031 plugins.go:603] "Loaded volume 
plugin" pluginName="kubernetes.io/nfs" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698868 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 08:46:25.698876 master-0 kubenswrapper[4031]: I0318 08:46:25.698883 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.698898 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.698916 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.698932 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.698968 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.698993 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 08:46:25.699214 master-0 kubenswrapper[4031]: I0318 08:46:25.699052 4031 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 08:46:25.702522 master-0 kubenswrapper[4031]: I0318 08:46:25.702461 4031 server.go:1280] "Started kubelet" Mar 18 08:46:25.702659 master-0 kubenswrapper[4031]: I0318 08:46:25.702512 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:25.704306 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 18 08:46:25.705517 master-0 kubenswrapper[4031]: I0318 08:46:25.703588 4031 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 08:46:25.705517 master-0 kubenswrapper[4031]: I0318 08:46:25.704367 4031 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 08:46:25.705517 master-0 kubenswrapper[4031]: I0318 08:46:25.703592 4031 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 08:46:25.705517 master-0 kubenswrapper[4031]: I0318 08:46:25.704964 4031 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 08:46:25.706420 master-0 kubenswrapper[4031]: I0318 08:46:25.706361 4031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 08:46:25.706420 master-0 kubenswrapper[4031]: I0318 08:46:25.706409 4031 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 08:46:25.706645 master-0 kubenswrapper[4031]: I0318 08:46:25.706584 4031 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 08:46:25.706645 master-0 kubenswrapper[4031]: I0318 08:46:25.706608 4031 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 08:46:25.706806 master-0 kubenswrapper[4031]: I0318 08:46:25.706764 4031 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 08:46:25.706878 master-0 kubenswrapper[4031]: E0318 08:46:25.706804 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:25.707732 master-0 kubenswrapper[4031]: W0318 08:46:25.707648 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:25.707858 master-0 kubenswrapper[4031]: 
E0318 08:46:25.707743 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:25.707976 master-0 kubenswrapper[4031]: I0318 08:46:25.707941 4031 reconstruct.go:97] "Volume reconstruction finished" Mar 18 08:46:25.707976 master-0 kubenswrapper[4031]: I0318 08:46:25.707966 4031 reconciler.go:26] "Reconciler: start to sync state" Mar 18 08:46:25.708771 master-0 kubenswrapper[4031]: E0318 08:46:25.708709 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 08:46:25.710824 master-0 kubenswrapper[4031]: I0318 08:46:25.710785 4031 factory.go:55] Registering systemd factory Mar 18 08:46:25.711023 master-0 kubenswrapper[4031]: I0318 08:46:25.710997 4031 factory.go:221] Registration of the systemd container factory successfully Mar 18 08:46:25.711419 master-0 kubenswrapper[4031]: I0318 08:46:25.711397 4031 factory.go:153] Registering CRI-O factory Mar 18 08:46:25.711546 master-0 kubenswrapper[4031]: I0318 08:46:25.711527 4031 factory.go:221] Registration of the crio container factory successfully Mar 18 08:46:25.711770 master-0 kubenswrapper[4031]: I0318 08:46:25.711748 4031 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 08:46:25.711936 master-0 kubenswrapper[4031]: I0318 08:46:25.711912 4031 factory.go:103] Registering Raw factory Mar 18 08:46:25.712063 
master-0 kubenswrapper[4031]: I0318 08:46:25.712046 4031 manager.go:1196] Started watching for new ooms in manager Mar 18 08:46:25.713209 master-0 kubenswrapper[4031]: I0318 08:46:25.713179 4031 manager.go:319] Starting recovery of all containers Mar 18 08:46:25.714129 master-0 kubenswrapper[4031]: I0318 08:46:25.714082 4031 server.go:449] "Adding debug handlers to kubelet server" Mar 18 08:46:25.716376 master-0 kubenswrapper[4031]: E0318 08:46:25.708689 4031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de327300089d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,LastTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:25.722071 master-0 kubenswrapper[4031]: E0318 08:46:25.722019 4031 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 08:46:25.736939 master-0 kubenswrapper[4031]: I0318 08:46:25.736721 4031 manager.go:324] Recovery completed Mar 18 08:46:25.744781 master-0 kubenswrapper[4031]: I0318 08:46:25.744725 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:25.750614 master-0 kubenswrapper[4031]: I0318 08:46:25.750465 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:25.750614 master-0 kubenswrapper[4031]: I0318 08:46:25.750605 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:25.750614 master-0 kubenswrapper[4031]: I0318 08:46:25.750626 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:25.752324 master-0 kubenswrapper[4031]: I0318 08:46:25.752260 4031 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 18 08:46:25.752324 master-0 kubenswrapper[4031]: I0318 08:46:25.752295 4031 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 18 08:46:25.752593 master-0 kubenswrapper[4031]: I0318 08:46:25.752358 4031 state_mem.go:36] "Initialized new in-memory state store" Mar 18 08:46:25.794874 master-0 kubenswrapper[4031]: I0318 08:46:25.794764 4031 policy_none.go:49] "None policy: Start" Mar 18 08:46:25.795893 master-0 kubenswrapper[4031]: I0318 08:46:25.795873 4031 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 18 08:46:25.795989 master-0 kubenswrapper[4031]: I0318 08:46:25.795976 4031 state_mem.go:35] "Initializing new in-memory state store" Mar 18 08:46:25.807906 master-0 kubenswrapper[4031]: E0318 08:46:25.807816 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:25.909493 
master-0 kubenswrapper[4031]: I0318 08:46:25.888976 4031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.891191 4031 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.891277 4031 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.891314 4031 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: E0318 08:46:25.891393 4031 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: W0318 08:46:25.892822 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: E0318 08:46:25.893079 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.900095 4031 manager.go:334] "Starting Device Plugin manager" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.900166 4031 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.900193 4031 
server.go:79] "Starting device plugin registration server" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.900805 4031 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.900829 4031 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.901009 4031 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.901647 4031 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: I0318 08:46:25.901660 4031 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 18 08:46:25.909493 master-0 kubenswrapper[4031]: E0318 08:46:25.902473 4031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:25.910761 master-0 kubenswrapper[4031]: E0318 08:46:25.910701 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 08:46:25.991963 master-0 kubenswrapper[4031]: I0318 08:46:25.991843 4031 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 08:46:25.991963 master-0 kubenswrapper[4031]: I0318 08:46:25.991957 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach"
Mar 18 08:46:25.993783 master-0 kubenswrapper[4031]: I0318 08:46:25.993722 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.993926 master-0 kubenswrapper[4031]: I0318 08:46:25.993813 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.993926 master-0 kubenswrapper[4031]: I0318 08:46:25.993838 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.994201 master-0 kubenswrapper[4031]: I0318 08:46:25.994151 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.994532 master-0 kubenswrapper[4031]: I0318 08:46:25.994471 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:25.994681 master-0 kubenswrapper[4031]: I0318 08:46:25.994542 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.995846 master-0 kubenswrapper[4031]: I0318 08:46:25.995796 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.995846 master-0 kubenswrapper[4031]: I0318 08:46:25.995832 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.996055 master-0 kubenswrapper[4031]: I0318 08:46:25.995852 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.996055 master-0 kubenswrapper[4031]: I0318 08:46:25.995976 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.996466 master-0 kubenswrapper[4031]: I0318 08:46:25.996427 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:25.996609 master-0 kubenswrapper[4031]: I0318 08:46:25.996472 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.996917 master-0 kubenswrapper[4031]: I0318 08:46:25.996872 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.997135 master-0 kubenswrapper[4031]: I0318 08:46:25.997104 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.997318 master-0 kubenswrapper[4031]: I0318 08:46:25.997291 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.997502 master-0 kubenswrapper[4031]: I0318 08:46:25.997425 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.997656 master-0 kubenswrapper[4031]: I0318 08:46:25.997503 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.997656 master-0 kubenswrapper[4031]: I0318 08:46:25.997544 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.997656 master-0 kubenswrapper[4031]: I0318 08:46:25.997560 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.997656 master-0 kubenswrapper[4031]: I0318 08:46:25.997511 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.998233 master-0 kubenswrapper[4031]: I0318 08:46:25.997686 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.998233 master-0 kubenswrapper[4031]: I0318 08:46:25.997830 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.998233 master-0 kubenswrapper[4031]: I0318 08:46:25.997901 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:25.998233 master-0 kubenswrapper[4031]: I0318 08:46:25.997955 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.999020 master-0 kubenswrapper[4031]: I0318 08:46:25.998957 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.999020 master-0 kubenswrapper[4031]: I0318 08:46:25.998997 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.999020 master-0 kubenswrapper[4031]: I0318 08:46:25.999014 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.999235 master-0 kubenswrapper[4031]: I0318 08:46:25.999087 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:25.999235 master-0 kubenswrapper[4031]: I0318 08:46:25.999121 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:25.999235 master-0 kubenswrapper[4031]: I0318 08:46:25.999138 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:25.999235 master-0 kubenswrapper[4031]: I0318 08:46:25.999163 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:25.999477 master-0 kubenswrapper[4031]: I0318 08:46:25.999231 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:25.999477 master-0 kubenswrapper[4031]: I0318 08:46:25.999439 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:26.000210 master-0 kubenswrapper[4031]: I0318 08:46:26.000158 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.000210 master-0 kubenswrapper[4031]: I0318 08:46:26.000205 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.000397 master-0 kubenswrapper[4031]: I0318 08:46:26.000221 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.000397 master-0 kubenswrapper[4031]: I0318 08:46:26.000365 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.000397 master-0 kubenswrapper[4031]: I0318 08:46:26.000394 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:26.000811 master-0 kubenswrapper[4031]: I0318 08:46:26.000741 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.000915 master-0 kubenswrapper[4031]: I0318 08:46:26.000856 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.000915 master-0 kubenswrapper[4031]: I0318 08:46:26.000896 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.001033 master-0 kubenswrapper[4031]: I0318 08:46:26.000959 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:26.001430 master-0 kubenswrapper[4031]: I0318 08:46:26.001374 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.001515 master-0 kubenswrapper[4031]: I0318 08:46:26.001436 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.001515 master-0 kubenswrapper[4031]: I0318 08:46:26.001461 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.001919 master-0 kubenswrapper[4031]: I0318 08:46:26.001873 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.002010 master-0 kubenswrapper[4031]: I0318 08:46:26.001924 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.002010 master-0 kubenswrapper[4031]: I0318 08:46:26.001948 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.002010 master-0 kubenswrapper[4031]: I0318 08:46:26.001989 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:26.003140 master-0 kubenswrapper[4031]: E0318 08:46:26.003078 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:26.009318 master-0 kubenswrapper[4031]: I0318 08:46:26.009258 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.009318 master-0 kubenswrapper[4031]: I0318 08:46:26.009315 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.009530 master-0 kubenswrapper[4031]: I0318 08:46:26.009346 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.009530 master-0 kubenswrapper[4031]: I0318 08:46:26.009381 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.009530 master-0 kubenswrapper[4031]: I0318 08:46:26.009412 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.009791 master-0 kubenswrapper[4031]: I0318 08:46:26.009524 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.009791 master-0 kubenswrapper[4031]: I0318 08:46:26.009672 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.009791 master-0 kubenswrapper[4031]: I0318 08:46:26.009712 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.009791 master-0 kubenswrapper[4031]: I0318 08:46:26.009776 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.009823 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.009864 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.009911 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.009943 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.010009 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.010064 master-0 kubenswrapper[4031]: I0318 08:46:26.010069 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.010518 master-0 kubenswrapper[4031]: I0318 08:46:26.010106 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.010518 master-0 kubenswrapper[4031]: I0318 08:46:26.010140 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110445 master-0 kubenswrapper[4031]: I0318 08:46:26.110366 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.110445 master-0 kubenswrapper[4031]: I0318 08:46:26.110437 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110708 master-0 kubenswrapper[4031]: I0318 08:46:26.110475 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110708 master-0 kubenswrapper[4031]: I0318 08:46:26.110646 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110831 master-0 kubenswrapper[4031]: I0318 08:46:26.110755 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110831 master-0 kubenswrapper[4031]: I0318 08:46:26.110824 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.110945 master-0 kubenswrapper[4031]: I0318 08:46:26.110846 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.110945 master-0 kubenswrapper[4031]: I0318 08:46:26.110921 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.111054 master-0 kubenswrapper[4031]: I0318 08:46:26.110981 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.111054 master-0 kubenswrapper[4031]: I0318 08:46:26.111032 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.111054 master-0 kubenswrapper[4031]: I0318 08:46:26.111042 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111065 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111099 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111100 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111157 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111162 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111198 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.111230 master-0 kubenswrapper[4031]: I0318 08:46:26.111223 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111262 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111267 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111295 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111342 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111382 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111402 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111469 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.111626 master-0 kubenswrapper[4031]: I0318 08:46:26.111545 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111656 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111686 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111781 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111837 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111725 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111786 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.111918 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.112107 master-0 kubenswrapper[4031]: I0318 08:46:26.112004 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.117533 master-0 kubenswrapper[4031]: I0318 08:46:26.117486 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:26.204216 master-0 kubenswrapper[4031]: I0318 08:46:26.204111 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:26.205852 master-0 kubenswrapper[4031]: I0318 08:46:26.205756 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.205967 master-0 kubenswrapper[4031]: I0318 08:46:26.205859 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.205967 master-0 kubenswrapper[4031]: I0318 08:46:26.205911 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.206082 master-0 kubenswrapper[4031]: I0318 08:46:26.206050 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:26.207863 master-0 kubenswrapper[4031]: E0318 08:46:26.207757 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:26.313087 master-0 kubenswrapper[4031]: E0318 08:46:26.312886 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 18 08:46:26.344227 master-0 kubenswrapper[4031]: I0318 08:46:26.344075 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:46:26.360519 master-0 kubenswrapper[4031]: I0318 08:46:26.360447 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:46:26.385094 master-0 kubenswrapper[4031]: I0318 08:46:26.385009 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:26.406255 master-0 kubenswrapper[4031]: I0318 08:46:26.406172 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:46:26.608666 master-0 kubenswrapper[4031]: I0318 08:46:26.608461 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:26.610279 master-0 kubenswrapper[4031]: I0318 08:46:26.610220 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:26.610279 master-0 kubenswrapper[4031]: I0318 08:46:26.610282 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:26.610422 master-0 kubenswrapper[4031]: I0318 08:46:26.610305 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:26.610422 master-0 kubenswrapper[4031]: I0318 08:46:26.610374 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:26.611511 master-0 kubenswrapper[4031]: E0318 08:46:26.611411 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 18 08:46:26.653880 master-0 kubenswrapper[4031]: W0318 08:46:26.653754 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:26.654020 master-0 kubenswrapper[4031]: E0318 08:46:26.653870 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:26.703996 master-0 kubenswrapper[4031]: I0318 08:46:26.703915 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:26.816183 master-0 kubenswrapper[4031]: W0318 08:46:26.816066 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:26.816398 master-0 kubenswrapper[4031]: E0318 08:46:26.816190 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:26.885834 master-0 kubenswrapper[4031]: W0318 08:46:26.885711 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097 WatchSource:0}: Error finding container a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097: Status 404 returned error can't find the container with id a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097
Mar 18 08:46:26.889201 master-0 kubenswrapper[4031]: W0318 08:46:26.889079 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed WatchSource:0}: Error finding container 9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed: Status 404 returned error can't find the container with id 9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed
Mar 18 08:46:26.895474 master-0 kubenswrapper[4031]: I0318 08:46:26.895376 4031 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 08:46:26.897655 master-0 kubenswrapper[4031]: I0318 08:46:26.897443 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed"}
Mar 18 08:46:26.899730 master-0 kubenswrapper[4031]: I0318 08:46:26.899665 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097"}
Mar 18 08:46:26.904965 master-0 kubenswrapper[4031]: W0318 08:46:26.904877 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b WatchSource:0}: Error finding container e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b: Status 404 returned error can't find the container with id e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b
Mar 18 08:46:26.913142 master-0 kubenswrapper[4031]: W0318 08:46:26.913062 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d WatchSource:0}: Error finding container ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d: Status 404 returned error can't find the container with id ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d
Mar 18 08:46:26.914919 master-0 kubenswrapper[4031]: W0318 08:46:26.914858 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768 WatchSource:0}: Error finding container b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768: Status 404 returned error can't find the container with id b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768
Mar 18 08:46:26.966506 master-0 kubenswrapper[4031]: W0318 08:46:26.966390 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:26.966506 master-0 kubenswrapper[4031]: E0318 08:46:26.966501 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:27.114354 master-0 kubenswrapper[4031]: E0318 08:46:27.114185 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 08:46:27.413004 master-0 kubenswrapper[4031]: I0318 08:46:27.412315 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:27.413724 master-0 kubenswrapper[4031]: I0318 08:46:27.413674 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:27.413881 master-0 kubenswrapper[4031]: I0318 08:46:27.413732 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:27.413881 master-0 kubenswrapper[4031]: I0318 08:46:27.413755 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:27.413881 master-0 kubenswrapper[4031]: I0318 08:46:27.413823 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:27.415506 master-0 kubenswrapper[4031]: E0318 08:46:27.415428 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 08:46:27.441771 master-0 kubenswrapper[4031]: W0318 08:46:27.441610 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:27.441771 master-0 kubenswrapper[4031]: E0318 08:46:27.441706 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:27.552705 master-0 kubenswrapper[4031]: E0318 08:46:27.552496 4031 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189de327300089d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,LastTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:27.679390 master-0 kubenswrapper[4031]: I0318 08:46:27.679274 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 08:46:27.680428 master-0 kubenswrapper[4031]: E0318 08:46:27.680402 4031 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:27.703626 master-0 kubenswrapper[4031]: I0318 08:46:27.703578 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:27.902975 master-0 kubenswrapper[4031]: I0318 08:46:27.902908 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768"} Mar 18 08:46:27.903990 master-0 kubenswrapper[4031]: I0318 08:46:27.903938 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d"} Mar 18 08:46:27.904983 master-0 kubenswrapper[4031]: I0318 08:46:27.904957 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b"} Mar 18 08:46:28.679869 master-0 kubenswrapper[4031]: W0318 08:46:28.679493 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:28.680607 master-0 kubenswrapper[4031]: E0318 08:46:28.679875 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:28.704802 master-0 
kubenswrapper[4031]: I0318 08:46:28.704737 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:28.715803 master-0 kubenswrapper[4031]: E0318 08:46:28.715734 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 08:46:28.911839 master-0 kubenswrapper[4031]: I0318 08:46:28.911661 4031 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4" exitCode=0 Mar 18 08:46:28.911839 master-0 kubenswrapper[4031]: I0318 08:46:28.911760 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4"} Mar 18 08:46:28.912054 master-0 kubenswrapper[4031]: I0318 08:46:28.911950 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:28.913414 master-0 kubenswrapper[4031]: I0318 08:46:28.913354 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:28.913414 master-0 kubenswrapper[4031]: I0318 08:46:28.913407 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:28.913516 master-0 kubenswrapper[4031]: I0318 08:46:28.913430 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientPID" Mar 18 08:46:29.016611 master-0 kubenswrapper[4031]: I0318 08:46:29.016548 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:29.017717 master-0 kubenswrapper[4031]: I0318 08:46:29.017666 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:29.017782 master-0 kubenswrapper[4031]: I0318 08:46:29.017733 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:29.017782 master-0 kubenswrapper[4031]: I0318 08:46:29.017751 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:29.017860 master-0 kubenswrapper[4031]: I0318 08:46:29.017836 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:29.019354 master-0 kubenswrapper[4031]: E0318 08:46:29.019315 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 08:46:29.163037 master-0 kubenswrapper[4031]: W0318 08:46:29.162929 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:29.163037 master-0 kubenswrapper[4031]: E0318 08:46:29.162977 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Mar 18 08:46:29.199606 master-0 kubenswrapper[4031]: W0318 08:46:29.199491 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:29.199768 master-0 kubenswrapper[4031]: E0318 08:46:29.199603 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:29.371998 master-0 kubenswrapper[4031]: W0318 08:46:29.371921 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:29.371998 master-0 kubenswrapper[4031]: E0318 08:46:29.371971 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:29.703728 master-0 kubenswrapper[4031]: I0318 08:46:29.703664 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:29.915219 master-0 kubenswrapper[4031]: I0318 08:46:29.915178 4031 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 08:46:29.915655 master-0 kubenswrapper[4031]: I0318 08:46:29.915617 4031 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="7be8e8386f258c98b7cef80ae0b7a78c8d360557eaa37f236d394a8770dc0b20" exitCode=1 Mar 18 08:46:29.915717 master-0 kubenswrapper[4031]: I0318 08:46:29.915666 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"7be8e8386f258c98b7cef80ae0b7a78c8d360557eaa37f236d394a8770dc0b20"} Mar 18 08:46:29.915784 master-0 kubenswrapper[4031]: I0318 08:46:29.915744 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:29.916758 master-0 kubenswrapper[4031]: I0318 08:46:29.916733 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:29.916818 master-0 kubenswrapper[4031]: I0318 08:46:29.916769 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:29.916818 master-0 kubenswrapper[4031]: I0318 08:46:29.916782 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:29.917075 master-0 kubenswrapper[4031]: I0318 08:46:29.917056 4031 scope.go:117] "RemoveContainer" containerID="7be8e8386f258c98b7cef80ae0b7a78c8d360557eaa37f236d394a8770dc0b20" Mar 18 08:46:30.703901 master-0 kubenswrapper[4031]: I0318 08:46:30.703862 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: 
connection refused Mar 18 08:46:30.919940 master-0 kubenswrapper[4031]: I0318 08:46:30.919907 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047"} Mar 18 08:46:30.921610 master-0 kubenswrapper[4031]: I0318 08:46:30.921459 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 08:46:30.922375 master-0 kubenswrapper[4031]: I0318 08:46:30.922343 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log" Mar 18 08:46:30.925440 master-0 kubenswrapper[4031]: I0318 08:46:30.925385 4031 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b" exitCode=1 Mar 18 08:46:30.925518 master-0 kubenswrapper[4031]: I0318 08:46:30.925449 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b"} Mar 18 08:46:30.925518 master-0 kubenswrapper[4031]: I0318 08:46:30.925508 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:30.925620 master-0 kubenswrapper[4031]: I0318 08:46:30.925535 4031 scope.go:117] "RemoveContainer" containerID="7be8e8386f258c98b7cef80ae0b7a78c8d360557eaa37f236d394a8770dc0b20" Mar 18 08:46:30.926382 master-0 kubenswrapper[4031]: I0318 08:46:30.926346 4031 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:30.926439 master-0 kubenswrapper[4031]: I0318 08:46:30.926387 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:30.926439 master-0 kubenswrapper[4031]: I0318 08:46:30.926398 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:30.926759 master-0 kubenswrapper[4031]: I0318 08:46:30.926739 4031 scope.go:117] "RemoveContainer" containerID="0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b" Mar 18 08:46:30.926897 master-0 kubenswrapper[4031]: E0318 08:46:30.926875 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 08:46:31.704169 master-0 kubenswrapper[4031]: I0318 08:46:31.704095 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:31.874859 master-0 kubenswrapper[4031]: I0318 08:46:31.874755 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 18 08:46:31.875876 master-0 kubenswrapper[4031]: E0318 08:46:31.875823 4031 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:31.917380 master-0 kubenswrapper[4031]: E0318 08:46:31.917286 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 08:46:31.955686 master-0 kubenswrapper[4031]: I0318 08:46:31.955487 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log" Mar 18 08:46:31.957706 master-0 kubenswrapper[4031]: I0318 08:46:31.957663 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:31.958615 master-0 kubenswrapper[4031]: I0318 08:46:31.958532 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:31.958615 master-0 kubenswrapper[4031]: I0318 08:46:31.958577 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:31.958615 master-0 kubenswrapper[4031]: I0318 08:46:31.958585 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:31.958890 master-0 kubenswrapper[4031]: I0318 08:46:31.958830 4031 scope.go:117] "RemoveContainer" containerID="0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b" Mar 18 08:46:31.958973 master-0 kubenswrapper[4031]: E0318 08:46:31.958949 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 08:46:31.960076 master-0 kubenswrapper[4031]: I0318 08:46:31.960021 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9"} Mar 18 08:46:31.960076 master-0 kubenswrapper[4031]: I0318 08:46:31.960079 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:31.960834 master-0 kubenswrapper[4031]: I0318 08:46:31.960789 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:31.960834 master-0 kubenswrapper[4031]: I0318 08:46:31.960812 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:31.960834 master-0 kubenswrapper[4031]: I0318 08:46:31.960819 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:32.060398 master-0 kubenswrapper[4031]: W0318 08:46:32.060324 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:32.060596 master-0 kubenswrapper[4031]: E0318 08:46:32.060403 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:32.232404 master-0 kubenswrapper[4031]: I0318 08:46:32.232261 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:32.239962 master-0 kubenswrapper[4031]: I0318 08:46:32.239923 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:32.240044 master-0 kubenswrapper[4031]: I0318 08:46:32.239966 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:32.240044 master-0 kubenswrapper[4031]: I0318 08:46:32.239978 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:32.240044 master-0 kubenswrapper[4031]: I0318 08:46:32.240024 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:32.240783 master-0 kubenswrapper[4031]: E0318 08:46:32.240743 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 18 08:46:32.704810 master-0 kubenswrapper[4031]: I0318 08:46:32.704623 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:32.962350 master-0 kubenswrapper[4031]: I0318 08:46:32.962295 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:32.963478 master-0 kubenswrapper[4031]: I0318 08:46:32.963426 4031 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:32.963671 master-0 kubenswrapper[4031]: I0318 08:46:32.963490 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:32.963671 master-0 kubenswrapper[4031]: I0318 08:46:32.963511 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:33.438058 master-0 kubenswrapper[4031]: W0318 08:46:33.437915 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:33.438058 master-0 kubenswrapper[4031]: E0318 08:46:33.438000 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 18 08:46:33.703753 master-0 kubenswrapper[4031]: I0318 08:46:33.703645 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:34.703801 master-0 kubenswrapper[4031]: I0318 08:46:34.703680 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 18 08:46:34.797644 master-0 kubenswrapper[4031]: W0318 08:46:34.797494 4031 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:34.797808 master-0 kubenswrapper[4031]: E0318 08:46:34.797666 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:35.095739 master-0 kubenswrapper[4031]: W0318 08:46:35.095603 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:35.095739 master-0 kubenswrapper[4031]: E0318 08:46:35.095722 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 18 08:46:35.703934 master-0 kubenswrapper[4031]: I0318 08:46:35.703743 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 18 08:46:35.905582 master-0 kubenswrapper[4031]: E0318 08:46:35.902771 4031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 08:46:35.972603 master-0 kubenswrapper[4031]: I0318 08:46:35.970722 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d"}
Mar 18 08:46:35.972603 master-0 kubenswrapper[4031]: I0318 08:46:35.970781 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:35.972603 master-0 kubenswrapper[4031]: I0318 08:46:35.971800 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:35.972603 master-0 kubenswrapper[4031]: I0318 08:46:35.971847 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:35.972603 master-0 kubenswrapper[4031]: I0318 08:46:35.971856 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:35.974496 master-0 kubenswrapper[4031]: I0318 08:46:35.974329 4031 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d" exitCode=0
Mar 18 08:46:35.974496 master-0 kubenswrapper[4031]: I0318 08:46:35.974402 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d"}
Mar 18 08:46:35.974496 master-0 kubenswrapper[4031]: I0318 08:46:35.974439 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:35.975375 master-0 kubenswrapper[4031]: I0318 08:46:35.975351 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:35.975441 master-0 kubenswrapper[4031]: I0318 08:46:35.975391 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:35.975441 master-0 kubenswrapper[4031]: I0318 08:46:35.975404 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:35.976152 master-0 kubenswrapper[4031]: I0318 08:46:35.976108 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b"}
Mar 18 08:46:35.980210 master-0 kubenswrapper[4031]: I0318 08:46:35.980177 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:35.980960 master-0 kubenswrapper[4031]: I0318 08:46:35.980937 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:35.980960 master-0 kubenswrapper[4031]: I0318 08:46:35.980961 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:35.981044 master-0 kubenswrapper[4031]: I0318 08:46:35.980971 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:36.987395 master-0 kubenswrapper[4031]: I0318 08:46:36.987341 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a"}
Mar 18 08:46:36.989097 master-0 kubenswrapper[4031]: I0318 08:46:36.989044 4031 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b" exitCode=1
Mar 18 08:46:36.989191 master-0 kubenswrapper[4031]: I0318 08:46:36.989132 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b"}
Mar 18 08:46:36.989191 master-0 kubenswrapper[4031]: I0318 08:46:36.989142 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:36.990022 master-0 kubenswrapper[4031]: I0318 08:46:36.989958 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:36.990022 master-0 kubenswrapper[4031]: I0318 08:46:36.990013 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:36.990022 master-0 kubenswrapper[4031]: I0318 08:46:36.990045 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:37.882011 master-0 kubenswrapper[4031]: E0318 08:46:37.881747 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de327300089d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,LastTimestamp:2026-03-18 08:46:25.7024148 +0000 UTC m=+0.631939850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.883847 master-0 kubenswrapper[4031]: I0318 08:46:37.882604 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:37.891125 master-0 kubenswrapper[4031]: E0318 08:46:37.890967 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.899057 master-0 kubenswrapper[4031]: E0318 08:46:37.898895 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.903809 master-0 kubenswrapper[4031]: E0318 08:46:37.903696 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.909937 master-0 kubenswrapper[4031]: E0318 08:46:37.909193 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de3273cc1c720 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.916405536 +0000 UTC m=+0.845930556,LastTimestamp:2026-03-18 08:46:25.916405536 +0000 UTC m=+0.845930556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.969704 master-0 kubenswrapper[4031]: E0318 08:46:37.969585 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.993787355 +0000 UTC m=+0.923312405,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.974438 master-0 kubenswrapper[4031]: E0318 08:46:37.974354 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.993826466 +0000 UTC m=+0.923351506,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.981415 master-0 kubenswrapper[4031]: E0318 08:46:37.981257 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.993879668 +0000 UTC m=+0.923404718,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.986098 master-0 kubenswrapper[4031]: E0318 08:46:37.985998 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.995823583 +0000 UTC m=+0.925348623,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.990484 master-0 kubenswrapper[4031]: E0318 08:46:37.990363 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.995844594 +0000 UTC m=+0.925369634,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:37.994772 master-0 kubenswrapper[4031]: E0318 08:46:37.994666 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.995866194 +0000 UTC m=+0.925391234,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.011075 master-0 kubenswrapper[4031]: E0318 08:46:38.010943 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.997062265 +0000 UTC m=+0.926587345,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.020783 master-0 kubenswrapper[4031]: E0318 08:46:38.020668 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.997261801 +0000 UTC m=+0.926786851,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.025312 master-0 kubenswrapper[4031]: E0318 08:46:38.025216 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.997440687 +0000 UTC m=+0.926965747,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.029152 master-0 kubenswrapper[4031]: E0318 08:46:38.029070 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.997492409 +0000 UTC m=+0.927017459,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.032354 master-0 kubenswrapper[4031]: E0318 08:46:38.032289 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.99753291 +0000 UTC m=+0.927057960,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.042017 master-0 kubenswrapper[4031]: E0318 08:46:38.037052 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.997554431 +0000 UTC m=+0.927079481,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.045267 master-0 kubenswrapper[4031]: E0318 08:46:38.042662 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.997618803 +0000 UTC m=+0.927143853,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.051328 master-0 kubenswrapper[4031]: E0318 08:46:38.051217 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.997663555 +0000 UTC m=+0.927188605,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.072528 master-0 kubenswrapper[4031]: E0318 08:46:38.072404 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.997703786 +0000 UTC m=+0.927228836,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.080375 master-0 kubenswrapper[4031]: E0318 08:46:38.080282 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.998988499 +0000 UTC m=+0.928513539,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.084207 master-0 kubenswrapper[4031]: E0318 08:46:38.084140 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.999008 +0000 UTC m=+0.928533050,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.087662 master-0 kubenswrapper[4031]: E0318 08:46:38.087590 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e05a2b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e05a2b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750637099 +0000 UTC m=+0.680162139,LastTimestamp:2026-03-18 08:46:25.99902438 +0000 UTC m=+0.928549420,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.091701 master-0 kubenswrapper[4031]: E0318 08:46:38.091629 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732df0358\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732df0358 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750549336 +0000 UTC m=+0.680074386,LastTimestamp:2026-03-18 08:46:25.999111703 +0000 UTC m=+0.928636743,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.098861 master-0 kubenswrapper[4031]: E0318 08:46:38.097605 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189de32732e0122d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189de32732e0122d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:25.750618669 +0000 UTC m=+0.680143719,LastTimestamp:2026-03-18 08:46:25.999131904 +0000 UTC m=+0.928656954,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.103957 master-0 kubenswrapper[4031]: E0318 08:46:38.103842 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de327771a4bb0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:26.895285168 +0000 UTC m=+1.824810218,LastTimestamp:2026-03-18 08:46:26.895285168 +0000 UTC m=+1.824810218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.108400 master-0 kubenswrapper[4031]: E0318 08:46:38.108295 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de327771dcab0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:26.895514288 +0000 UTC m=+1.825039338,LastTimestamp:2026-03-18 08:46:26.895514288 +0000 UTC m=+1.825039338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.112662 master-0 kubenswrapper[4031]: E0318 08:46:38.112519 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32777e73d49 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:26.908716361 +0000 UTC m=+1.838241401,LastTimestamp:2026-03-18 08:46:26.908716361 +0000 UTC m=+1.838241401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.117586 master-0 kubenswrapper[4031]: E0318 08:46:38.117445 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de32778593aa7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:26.916186791 +0000 UTC m=+1.845711841,LastTimestamp:2026-03-18 08:46:26.916186791 +0000 UTC m=+1.845711841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.121998 master-0 kubenswrapper[4031]: E0318 08:46:38.121876 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327788cb102 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:26.919559426 +0000 UTC m=+1.849084466,LastTimestamp:2026-03-18 08:46:26.919559426 +0000 UTC m=+1.849084466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.125741 master-0 kubenswrapper[4031]: E0318 08:46:38.125624 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327dacb9230 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.648s (1.648s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.567847472 +0000 UTC m=+3.497372482,LastTimestamp:2026-03-18 08:46:28.567847472 +0000 UTC m=+3.497372482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.131220 master-0 kubenswrapper[4031]: E0318 08:46:38.130383 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327e6420b2a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.760161066 +0000 UTC m=+3.689686106,LastTimestamp:2026-03-18 08:46:28.760161066 +0000 UTC m=+3.689686106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.135652 master-0 kubenswrapper[4031]: E0318 08:46:38.135482 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327e72f09f2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.775692786 +0000 UTC m=+3.705217836,LastTimestamp:2026-03-18 08:46:28.775692786 +0000 UTC m=+3.705217836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.139684 master-0 kubenswrapper[4031]: E0318 08:46:38.139604 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327efb37ac3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.918590147 +0000 UTC m=+3.848115187,LastTimestamp:2026-03-18 08:46:28.918590147 +0000 UTC m=+3.848115187,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.144199 master-0 kubenswrapper[4031]: E0318 08:46:38.144085 4031 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fe4067d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.1627069 +0000 UTC m=+4.092231910,LastTimestamp:2026-03-18 08:46:29.1627069 +0000 UTC m=+4.092231910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.148088 master-0 kubenswrapper[4031]: E0318 08:46:38.148009 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fed3eae8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.172374248 +0000 UTC m=+4.101899258,LastTimestamp:2026-03-18 08:46:29.172374248 +0000 UTC m=+4.101899258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.152490 master-0 kubenswrapper[4031]: E0318 08:46:38.152377 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327efb37ac3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327efb37ac3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.918590147 +0000 UTC m=+3.848115187,LastTimestamp:2026-03-18 08:46:30.402222153 +0000 UTC m=+5.331747163,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.157311 master-0 kubenswrapper[4031]: E0318 08:46:38.157196 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3284b9779e4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 3.564s (3.564s including waiting). Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.460258788 +0000 UTC m=+5.389783798,LastTimestamp:2026-03-18 08:46:30.460258788 +0000 UTC m=+5.389783798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.163589 master-0 kubenswrapper[4031]: E0318 08:46:38.161950 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327fe4067d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fe4067d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.1627069 +0000 UTC m=+4.092231910,LastTimestamp:2026-03-18 08:46:30.555602424 +0000 UTC m=+5.485127434,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.166194 master-0 kubenswrapper[4031]: E0318 
08:46:38.165939 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327fed3eae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fed3eae8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.172374248 +0000 UTC m=+4.101899258,LastTimestamp:2026-03-18 08:46:30.568061938 +0000 UTC m=+5.497586948,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.171805 master-0 kubenswrapper[4031]: E0318 08:46:38.171708 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de328597eca53 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.693522003 +0000 UTC m=+5.623047013,LastTimestamp:2026-03-18 08:46:30.693522003 +0000 UTC m=+5.623047013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.176452 master-0 kubenswrapper[4031]: E0318 08:46:38.176349 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3285ac040d4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.714589396 +0000 UTC m=+5.644114406,LastTimestamp:2026-03-18 08:46:30.714589396 +0000 UTC m=+5.644114406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.180612 master-0 kubenswrapper[4031]: E0318 08:46:38.180498 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3285af3301f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.717927455 +0000 UTC 
m=+5.647452465,LastTimestamp:2026-03-18 08:46:30.717927455 +0000 UTC m=+5.647452465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.202961 master-0 kubenswrapper[4031]: E0318 08:46:38.185125 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de328668a5b14 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.912383764 +0000 UTC m=+5.841908764,LastTimestamp:2026-03-18 08:46:30.912383764 +0000 UTC m=+5.841908764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.202961 master-0 kubenswrapper[4031]: E0318 08:46:38.189391 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32867671223 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container 
kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.926848547 +0000 UTC m=+5.856373557,LastTimestamp:2026-03-18 08:46:30.926848547 +0000 UTC m=+5.856373557,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.236668 master-0 kubenswrapper[4031]: E0318 08:46:38.236513 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de32867a96d29 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.931197225 +0000 UTC m=+5.860722275,LastTimestamp:2026-03-18 08:46:30.931197225 +0000 UTC m=+5.860722275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.240484 master-0 kubenswrapper[4031]: E0318 08:46:38.240400 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32867671223\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32867671223 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.926848547 +0000 UTC m=+5.856373557,LastTimestamp:2026-03-18 08:46:31.958924682 +0000 UTC m=+6.888449692,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.244357 master-0 kubenswrapper[4031]: E0318 08:46:38.244271 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32964213480 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.271s (8.271s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.166905472 +0000 UTC m=+10.096430492,LastTimestamp:2026-03-18 08:46:35.166905472 +0000 UTC m=+10.096430492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.248608 master-0 kubenswrapper[4031]: E0318 08:46:38.248503 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de329668fc39c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.291s (8.291s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.2077055 +0000 UTC m=+10.137230520,LastTimestamp:2026-03-18 08:46:35.2077055 +0000 UTC m=+10.137230520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.254151 master-0 kubenswrapper[4031]: E0318 08:46:38.254073 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32968f861fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.339s (8.339s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.24811622 +0000 UTC m=+10.177641270,LastTimestamp:2026-03-18 08:46:35.24811622 +0000 UTC m=+10.177641270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.257676 master-0 kubenswrapper[4031]: E0318 08:46:38.257557 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de329717c1796 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.390965654 +0000 UTC m=+10.320490684,LastTimestamp:2026-03-18 08:46:35.390965654 +0000 UTC m=+10.320490684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.261573 master-0 kubenswrapper[4031]: E0318 08:46:38.261486 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32972479a71 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.404302961 +0000 UTC m=+10.333828001,LastTimestamp:2026-03-18 08:46:35.404302961 +0000 UTC m=+10.333828001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.264920 master-0 kubenswrapper[4031]: E0318 08:46:38.264829 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de329725ab716 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.405555478 +0000 UTC m=+10.335080518,LastTimestamp:2026-03-18 08:46:35.405555478 +0000 UTC m=+10.335080518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.268675 master-0 kubenswrapper[4031]: E0318 08:46:38.268530 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de32973765847 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.424143431 +0000 UTC m=+10.353668451,LastTimestamp:2026-03-18 08:46:35.424143431 +0000 UTC m=+10.353668451,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.272860 master-0 kubenswrapper[4031]: E0318 08:46:38.272773 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de3297456e983 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.438860675 +0000 UTC m=+10.368385695,LastTimestamp:2026-03-18 08:46:35.438860675 +0000 UTC m=+10.368385695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:38.276210 master-0 kubenswrapper[4031]: E0318 08:46:38.276112 4031 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32977c8f305 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.496665861 +0000 UTC m=+10.426190911,LastTimestamp:2026-03-18 08:46:35.496665861 +0000 UTC m=+10.426190911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.279498 master-0 kubenswrapper[4031]: E0318 08:46:38.279421 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32978c44646 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.51313671 +0000 UTC m=+10.442661750,LastTimestamp:2026-03-18 08:46:35.51313671 +0000 UTC m=+10.442661750,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.282962 master-0 kubenswrapper[4031]: E0318 08:46:38.282896 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32994994c1f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.980082207 +0000 UTC m=+10.909607227,LastTimestamp:2026-03-18 08:46:35.980082207 +0000 UTC m=+10.909607227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.286224 master-0 kubenswrapper[4031]: E0318 08:46:38.286111 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de329a3cc81a2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:36.235096482 +0000 UTC m=+11.164621492,LastTimestamp:2026-03-18 08:46:36.235096482 +0000 UTC m=+11.164621492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.292670 master-0 kubenswrapper[4031]: E0318 08:46:38.292455 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de329a4c62a5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:36.251458141 +0000 UTC m=+11.180983161,LastTimestamp:2026-03-18 08:46:36.251458141 +0000 UTC m=+11.180983161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.298346 master-0 kubenswrapper[4031]: E0318 08:46:38.298238 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de329a4d4cb51 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:36.252416849 +0000 UTC m=+11.181941859,LastTimestamp:2026-03-18 08:46:36.252416849 +0000 UTC m=+11.181941859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:38.321265 master-0 kubenswrapper[4031]: E0318 08:46:38.321237 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 18 08:46:38.641340 master-0 kubenswrapper[4031]: I0318 08:46:38.641294 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:38.642253 master-0 kubenswrapper[4031]: I0318 08:46:38.642219 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:38.642320 master-0 kubenswrapper[4031]: I0318 08:46:38.642263 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:38.642320 master-0 kubenswrapper[4031]: I0318 08:46:38.642312 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:38.642399 master-0 kubenswrapper[4031]: I0318 08:46:38.642365 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:46:38.648351 master-0 kubenswrapper[4031]: E0318 08:46:38.648320 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 18 08:46:38.749973 master-0 kubenswrapper[4031]: I0318 08:46:38.749913 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:39.013692 master-0 kubenswrapper[4031]: E0318 08:46:39.013548 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32a4922eebb kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 3.603s (3.603s including waiting). Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.009001147 +0000 UTC m=+13.938526177,LastTimestamp:2026-03-18 08:46:39.009001147 +0000 UTC m=+13.938526177,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.053845 master-0 kubenswrapper[4031]: E0318 08:46:39.053673 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32a4b7e1eca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 2.796s (2.796s including waiting). Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.048531658 +0000 UTC m=+13.978056678,LastTimestamp:2026-03-18 08:46:39.048531658 +0000 UTC m=+13.978056678,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.197469 master-0 kubenswrapper[4031]: E0318 08:46:39.197234 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32a5418c2f7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.192883959 +0000 UTC m=+14.122408969,LastTimestamp:2026-03-18 08:46:39.192883959 +0000 UTC m=+14.122408969,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.208413 master-0 kubenswrapper[4031]: E0318 08:46:39.208307 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32a54c0a1c8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.203885512 +0000 UTC m=+14.133410532,LastTimestamp:2026-03-18 08:46:39.203885512 +0000 UTC m=+14.133410532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.225010 master-0 kubenswrapper[4031]: E0318 08:46:39.224895 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32a55c43e5b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.220899419 +0000 UTC m=+14.150424449,LastTimestamp:2026-03-18 08:46:39.220899419 +0000 UTC m=+14.150424449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.242885 master-0 kubenswrapper[4031]: E0318 08:46:39.242785 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189de32a569b7fd8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:39.235006424 +0000 UTC m=+14.164531434,LastTimestamp:2026-03-18 08:46:39.235006424 +0000 UTC m=+14.164531434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:39.711356 master-0 kubenswrapper[4031]: I0318 08:46:39.711290 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:40.000461 master-0 kubenswrapper[4031]: I0318 08:46:40.000312 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"}
Mar 18 08:46:40.000461 master-0 kubenswrapper[4031]: I0318 08:46:40.000397 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:40.001911 master-0 kubenswrapper[4031]: I0318 08:46:40.001661 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:40.001911 master-0 kubenswrapper[4031]: I0318 08:46:40.001714 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:40.001911 master-0 kubenswrapper[4031]: I0318 08:46:40.001731 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:40.002141 master-0 kubenswrapper[4031]: I0318 08:46:40.002117 4031 scope.go:117] "RemoveContainer" containerID="3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b"
Mar 18 08:46:40.007430 master-0 kubenswrapper[4031]: I0318 08:46:40.007361 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95"}
Mar 18 08:46:40.007613 master-0 kubenswrapper[4031]: I0318 08:46:40.007512 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:40.008728 master-0 kubenswrapper[4031]: I0318 08:46:40.008670 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:40.008815 master-0 kubenswrapper[4031]: I0318 08:46:40.008739 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:40.008815 master-0 kubenswrapper[4031]: I0318 08:46:40.008763 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:40.014542 master-0 kubenswrapper[4031]: E0318 08:46:40.014373 4031 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32a848eaf96 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:40.005918614 +0000 UTC m=+14.935443654,LastTimestamp:2026-03-18 08:46:40.005918614 +0000 UTC m=+14.935443654,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:40.301278 master-0 kubenswrapper[4031]: E0318 08:46:40.301148 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189de329717c1796\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de329717c1796 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.390965654 +0000 UTC m=+10.320490684,LastTimestamp:2026-03-18 08:46:40.294311806 +0000 UTC m=+15.223836846,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:40.311866 master-0 kubenswrapper[4031]: E0318 08:46:40.311736 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189de32972479a71\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de32972479a71 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:35.404302961 +0000 UTC m=+10.333828001,LastTimestamp:2026-03-18 08:46:40.306648996 +0000 UTC m=+15.236174016,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:40.585954 master-0 kubenswrapper[4031]: I0318 08:46:40.585784 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:46:40.664999 master-0 kubenswrapper[4031]: I0318 08:46:40.664906 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 18 08:46:40.696482 master-0 kubenswrapper[4031]: I0318 08:46:40.696428 4031 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 18 08:46:40.712396 master-0 kubenswrapper[4031]: I0318 08:46:40.712310 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:41.012736 master-0 kubenswrapper[4031]: I0318 08:46:41.012645 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"}
Mar 18 08:46:41.012736 master-0 kubenswrapper[4031]: I0318 08:46:41.012725 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:41.013044 master-0 kubenswrapper[4031]: I0318 08:46:41.012746 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:41.014057 master-0 kubenswrapper[4031]: I0318 08:46:41.014003 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:41.014057 master-0 kubenswrapper[4031]: I0318 08:46:41.014042 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:41.014057 master-0 kubenswrapper[4031]: I0318 08:46:41.014058 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:41.014758 master-0 kubenswrapper[4031]: I0318 08:46:41.014697 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:41.015519 master-0 kubenswrapper[4031]: I0318 08:46:41.014795 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:41.015519 master-0 kubenswrapper[4031]: I0318 08:46:41.015157 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:41.711423 master-0 kubenswrapper[4031]: I0318 08:46:41.711336 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:42.015477 master-0 kubenswrapper[4031]: I0318 08:46:42.015335 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:42.016749 master-0 kubenswrapper[4031]: I0318 08:46:42.016694 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:42.016749 master-0 kubenswrapper[4031]: I0318 08:46:42.016742 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:42.016749 master-0 kubenswrapper[4031]: I0318 08:46:42.016757 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:42.712044 master-0 kubenswrapper[4031]: I0318 08:46:42.711976 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:42.892171 master-0 kubenswrapper[4031]: I0318 08:46:42.892074 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:42.893515 master-0 kubenswrapper[4031]: I0318 08:46:42.893467 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:42.893515 master-0 kubenswrapper[4031]: I0318 08:46:42.893508 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:42.893515 master-0 kubenswrapper[4031]: I0318 08:46:42.893520 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:42.893951 master-0 kubenswrapper[4031]: I0318 08:46:42.893918 4031 scope.go:117] "RemoveContainer" containerID="0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b"
Mar 18 08:46:42.908295 master-0 kubenswrapper[4031]: E0318 08:46:42.908060 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327efb37ac3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327efb37ac3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:28.918590147 +0000 UTC m=+3.848115187,LastTimestamp:2026-03-18 08:46:42.897860075 +0000 UTC m=+17.827385125,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:42.945279 master-0 kubenswrapper[4031]: W0318 08:46:42.945226 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 18 08:46:42.945517 master-0 kubenswrapper[4031]: E0318 08:46:42.945288 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 08:46:43.025330 master-0 kubenswrapper[4031]: W0318 08:46:43.025165 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 18 08:46:43.025330 master-0 kubenswrapper[4031]: E0318 08:46:43.025215 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 18 08:46:43.127239 master-0 kubenswrapper[4031]: E0318 08:46:43.127123 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327fe4067d4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fe4067d4 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.1627069 +0000 UTC m=+4.092231910,LastTimestamp:2026-03-18 08:46:43.121823977 +0000 UTC m=+18.051348997,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:43.140115 master-0 kubenswrapper[4031]: E0318 08:46:43.139989 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de327fed3eae8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de327fed3eae8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:29.172374248 +0000 UTC m=+4.101899258,LastTimestamp:2026-03-18 08:46:43.133806508 +0000 UTC m=+18.063331528,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 08:46:43.178190 master-0 kubenswrapper[4031]: I0318 08:46:43.178151 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:43.178363 master-0 kubenswrapper[4031]: I0318 08:46:43.178317 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:43.179729 master-0 kubenswrapper[4031]: I0318 08:46:43.179684 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:43.179729 master-0 kubenswrapper[4031]: I0318 08:46:43.179712 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:43.179729 master-0 kubenswrapper[4031]: I0318 08:46:43.179725 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:43.185365 master-0 kubenswrapper[4031]: I0318 08:46:43.185332 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:43.263323 master-0 kubenswrapper[4031]: I0318 08:46:43.263229 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:43.267730 master-0 kubenswrapper[4031]: I0318 08:46:43.267645 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:46:43.492853 master-0 kubenswrapper[4031]: W0318 08:46:43.492801 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:43.493113 master-0 kubenswrapper[4031]: E0318 08:46:43.492901 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 18 08:46:43.710164 master-0 kubenswrapper[4031]: I0318 08:46:43.710115 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 18 08:46:44.021982 master-0 kubenswrapper[4031]: I0318 08:46:44.021848 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 08:46:44.022700 master-0 kubenswrapper[4031]: I0318 08:46:44.022595 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 18 08:46:44.023227 master-0 kubenswrapper[4031]: I0318 08:46:44.023174 4031 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526" exitCode=1
Mar 18 08:46:44.023507 master-0 kubenswrapper[4031]: I0318 08:46:44.023330 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526"}
Mar 18 08:46:44.023507 master-0 kubenswrapper[4031]: I0318 08:46:44.023357 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:44.023507 master-0 kubenswrapper[4031]: I0318 08:46:44.023408 4031 scope.go:117] "RemoveContainer" containerID="0ce23a43327bf85344d980658053cf1798050df895d3a5f0357e5ef05399959b"
Mar 18 08:46:44.023711 master-0 kubenswrapper[4031]: I0318 08:46:44.023534 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:46:44.024868 master-0 kubenswrapper[4031]: I0318 08:46:44.024818 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:44.024868 master-0 kubenswrapper[4031]: I0318 08:46:44.024850 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: I0318 08:46:44.024889 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: I0318 08:46:44.024914 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: I0318 08:46:44.024890 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: I0318 08:46:44.025026 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: I0318 08:46:44.025389 4031 scope.go:117] "RemoveContainer" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526"
Mar 18 08:46:44.026146 master-0 kubenswrapper[4031]: E0318 08:46:44.025665 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 18 08:46:44.038014 master-0 kubenswrapper[4031]: E0318 08:46:44.037836 4031 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189de32867671223\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189de32867671223 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio
in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:46:30.926848547 +0000 UTC m=+5.856373557,LastTimestamp:2026-03-18 08:46:44.025626149 +0000 UTC m=+18.955151199,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:46:44.711653 master-0 kubenswrapper[4031]: I0318 08:46:44.711288 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:45.029037 master-0 kubenswrapper[4031]: I0318 08:46:45.028934 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 08:46:45.030011 master-0 kubenswrapper[4031]: I0318 08:46:45.029844 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:45.031158 master-0 kubenswrapper[4031]: I0318 08:46:45.031094 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:45.031158 master-0 kubenswrapper[4031]: I0318 08:46:45.031138 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:45.031158 master-0 kubenswrapper[4031]: I0318 08:46:45.031157 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:45.333191 master-0 kubenswrapper[4031]: E0318 08:46:45.332885 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User 
\"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 08:46:45.358481 master-0 kubenswrapper[4031]: I0318 08:46:45.358374 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:45.360942 master-0 kubenswrapper[4031]: I0318 08:46:45.358611 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:45.360942 master-0 kubenswrapper[4031]: I0318 08:46:45.359755 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:45.360942 master-0 kubenswrapper[4031]: I0318 08:46:45.359787 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:45.360942 master-0 kubenswrapper[4031]: I0318 08:46:45.359802 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:45.364001 master-0 kubenswrapper[4031]: I0318 08:46:45.363956 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:45.545563 master-0 kubenswrapper[4031]: I0318 08:46:45.545488 4031 csr.go:261] certificate signing request csr-lv58f is approved, waiting to be issued Mar 18 08:46:45.648975 master-0 kubenswrapper[4031]: I0318 08:46:45.648742 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:45.650355 master-0 kubenswrapper[4031]: I0318 08:46:45.650193 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:45.650355 master-0 kubenswrapper[4031]: I0318 08:46:45.650296 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Mar 18 08:46:45.650888 master-0 kubenswrapper[4031]: I0318 08:46:45.650748 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:45.651276 master-0 kubenswrapper[4031]: I0318 08:46:45.651182 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:45.660044 master-0 kubenswrapper[4031]: E0318 08:46:45.659978 4031 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 08:46:45.711372 master-0 kubenswrapper[4031]: I0318 08:46:45.711327 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:45.903765 master-0 kubenswrapper[4031]: E0318 08:46:45.903612 4031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:45.953832 master-0 kubenswrapper[4031]: W0318 08:46:45.953785 4031 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 18 08:46:45.953997 master-0 kubenswrapper[4031]: E0318 08:46:45.953842 4031 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 18 08:46:46.032177 master-0 kubenswrapper[4031]: I0318 
08:46:46.032141 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:46.032974 master-0 kubenswrapper[4031]: I0318 08:46:46.032280 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:46.033677 master-0 kubenswrapper[4031]: I0318 08:46:46.033643 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:46.033833 master-0 kubenswrapper[4031]: I0318 08:46:46.033699 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:46.033833 master-0 kubenswrapper[4031]: I0318 08:46:46.033723 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:46.521165 master-0 kubenswrapper[4031]: I0318 08:46:46.521028 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:46.528044 master-0 kubenswrapper[4031]: I0318 08:46:46.527959 4031 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:46.710886 master-0 kubenswrapper[4031]: I0318 08:46:46.710834 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:47.034603 master-0 kubenswrapper[4031]: I0318 08:46:47.034496 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:47.035663 master-0 kubenswrapper[4031]: I0318 08:46:47.035628 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" 
Mar 18 08:46:47.035779 master-0 kubenswrapper[4031]: I0318 08:46:47.035691 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:47.035779 master-0 kubenswrapper[4031]: I0318 08:46:47.035719 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:47.710360 master-0 kubenswrapper[4031]: I0318 08:46:47.710317 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:48.037237 master-0 kubenswrapper[4031]: I0318 08:46:48.037079 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:48.038070 master-0 kubenswrapper[4031]: I0318 08:46:48.038018 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:48.038070 master-0 kubenswrapper[4031]: I0318 08:46:48.038065 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:48.038070 master-0 kubenswrapper[4031]: I0318 08:46:48.038084 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:48.708158 master-0 kubenswrapper[4031]: I0318 08:46:48.708084 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:49.710200 master-0 kubenswrapper[4031]: I0318 08:46:49.710048 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:50.591898 master-0 kubenswrapper[4031]: I0318 08:46:50.591769 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:50.592167 master-0 kubenswrapper[4031]: I0318 08:46:50.591930 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:50.596992 master-0 kubenswrapper[4031]: I0318 08:46:50.596876 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:50.596992 master-0 kubenswrapper[4031]: I0318 08:46:50.596931 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:50.596992 master-0 kubenswrapper[4031]: I0318 08:46:50.596979 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:50.598003 master-0 kubenswrapper[4031]: I0318 08:46:50.597944 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:46:50.711388 master-0 kubenswrapper[4031]: I0318 08:46:50.711289 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:51.044948 master-0 kubenswrapper[4031]: I0318 08:46:51.044842 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:51.048439 master-0 kubenswrapper[4031]: I0318 08:46:51.046915 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:51.048439 master-0 kubenswrapper[4031]: I0318 
08:46:51.046998 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:51.048439 master-0 kubenswrapper[4031]: I0318 08:46:51.047024 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:51.710371 master-0 kubenswrapper[4031]: I0318 08:46:51.710300 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:52.341163 master-0 kubenswrapper[4031]: E0318 08:46:52.341089 4031 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 18 08:46:52.660372 master-0 kubenswrapper[4031]: I0318 08:46:52.660192 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:52.661633 master-0 kubenswrapper[4031]: I0318 08:46:52.661543 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:52.661731 master-0 kubenswrapper[4031]: I0318 08:46:52.661648 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:52.661731 master-0 kubenswrapper[4031]: I0318 08:46:52.661674 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:52.661851 master-0 kubenswrapper[4031]: I0318 08:46:52.661740 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:52.669099 master-0 kubenswrapper[4031]: E0318 08:46:52.669041 4031 kubelet_node_status.go:99] "Unable 
to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 18 08:46:52.709364 master-0 kubenswrapper[4031]: I0318 08:46:52.709312 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:53.710350 master-0 kubenswrapper[4031]: I0318 08:46:53.710262 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:54.710029 master-0 kubenswrapper[4031]: I0318 08:46:54.709976 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:55.709146 master-0 kubenswrapper[4031]: I0318 08:46:55.709085 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 08:46:55.903985 master-0 kubenswrapper[4031]: E0318 08:46:55.903882 4031 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 18 08:46:56.710366 master-0 kubenswrapper[4031]: I0318 08:46:56.710313 4031 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 18 
08:46:56.835460 master-0 kubenswrapper[4031]: I0318 08:46:56.835416 4031 csr.go:257] certificate signing request csr-lv58f is issued Mar 18 08:46:57.127814 master-0 kubenswrapper[4031]: I0318 08:46:57.127650 4031 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 08:46:57.571846 master-0 kubenswrapper[4031]: I0318 08:46:57.571785 4031 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 18 08:46:57.572090 master-0 kubenswrapper[4031]: W0318 08:46:57.572060 4031 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 18 08:46:57.716632 master-0 kubenswrapper[4031]: I0318 08:46:57.716545 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:57.732744 master-0 kubenswrapper[4031]: I0318 08:46:57.732693 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:57.789201 master-0 kubenswrapper[4031]: I0318 08:46:57.789138 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:57.837093 master-0 kubenswrapper[4031]: I0318 08:46:57.836873 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 02:22:32.520035948 +0000 UTC Mar 18 08:46:57.837093 master-0 kubenswrapper[4031]: I0318 08:46:57.836965 4031 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h35m34.683075908s for next certificate rotation Mar 18 08:46:58.048430 master-0 kubenswrapper[4031]: I0318 08:46:58.048328 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.048430 master-0 kubenswrapper[4031]: E0318 
08:46:58.048387 4031 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 08:46:58.069340 master-0 kubenswrapper[4031]: I0318 08:46:58.069255 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.084275 master-0 kubenswrapper[4031]: I0318 08:46:58.084187 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.141770 master-0 kubenswrapper[4031]: I0318 08:46:58.141626 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.408918 master-0 kubenswrapper[4031]: I0318 08:46:58.408759 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.408918 master-0 kubenswrapper[4031]: E0318 08:46:58.408809 4031 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 08:46:58.506710 master-0 kubenswrapper[4031]: I0318 08:46:58.506645 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.522442 master-0 kubenswrapper[4031]: I0318 08:46:58.522380 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.578182 master-0 kubenswrapper[4031]: I0318 08:46:58.578105 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.854363 master-0 kubenswrapper[4031]: I0318 08:46:58.854284 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:58.854363 master-0 kubenswrapper[4031]: E0318 08:46:58.854323 4031 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 18 08:46:59.346182 master-0 kubenswrapper[4031]: E0318 08:46:59.346128 
4031 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 18 08:46:59.450249 master-0 kubenswrapper[4031]: I0318 08:46:59.450207 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:59.466172 master-0 kubenswrapper[4031]: I0318 08:46:59.466131 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:59.524128 master-0 kubenswrapper[4031]: I0318 08:46:59.524084 4031 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 18 08:46:59.669677 master-0 kubenswrapper[4031]: I0318 08:46:59.669499 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:59.671121 master-0 kubenswrapper[4031]: I0318 08:46:59.671065 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:59.671121 master-0 kubenswrapper[4031]: I0318 08:46:59.671121 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:59.671326 master-0 kubenswrapper[4031]: I0318 08:46:59.671138 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:59.671326 master-0 kubenswrapper[4031]: I0318 08:46:59.671214 4031 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 18 08:46:59.684247 master-0 kubenswrapper[4031]: I0318 08:46:59.684157 4031 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 18 08:46:59.684247 master-0 kubenswrapper[4031]: E0318 08:46:59.684217 4031 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 18 08:46:59.698362 master-0 kubenswrapper[4031]: E0318 08:46:59.698269 4031 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:59.726178 master-0 kubenswrapper[4031]: I0318 08:46:59.726108 4031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 18 08:46:59.738492 master-0 kubenswrapper[4031]: I0318 08:46:59.738419 4031 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 18 08:46:59.799018 master-0 kubenswrapper[4031]: E0318 08:46:59.798928 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:46:59.838660 master-0 kubenswrapper[4031]: I0318 08:46:59.838557 4031 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 08:46:59.892307 master-0 kubenswrapper[4031]: I0318 08:46:59.892195 4031 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 08:46:59.893706 master-0 kubenswrapper[4031]: I0318 08:46:59.893658 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 08:46:59.893797 master-0 kubenswrapper[4031]: I0318 08:46:59.893767 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 18 08:46:59.893797 master-0 kubenswrapper[4031]: I0318 08:46:59.893789 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 18 08:46:59.894337 master-0 kubenswrapper[4031]: I0318 08:46:59.894298 4031 scope.go:117] "RemoveContainer" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526" Mar 18 08:46:59.894597 master-0 kubenswrapper[4031]: E0318 08:46:59.894526 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed 
container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19" Mar 18 08:46:59.900099 master-0 kubenswrapper[4031]: E0318 08:46:59.900030 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.000894 master-0 kubenswrapper[4031]: E0318 08:47:00.000828 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.102013 master-0 kubenswrapper[4031]: E0318 08:47:00.101913 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.202692 master-0 kubenswrapper[4031]: E0318 08:47:00.202553 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.303697 master-0 kubenswrapper[4031]: E0318 08:47:00.303502 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.404617 master-0 kubenswrapper[4031]: E0318 08:47:00.404472 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.505668 master-0 kubenswrapper[4031]: E0318 08:47:00.505545 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.606072 master-0 kubenswrapper[4031]: E0318 08:47:00.605920 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.706248 master-0 kubenswrapper[4031]: E0318 08:47:00.706133 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.806379 master-0 kubenswrapper[4031]: E0318 
08:47:00.806293 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:00.907717 master-0 kubenswrapper[4031]: E0318 08:47:00.907522 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:01.007823 master-0 kubenswrapper[4031]: E0318 08:47:01.007723 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:01.108608 master-0 kubenswrapper[4031]: E0318 08:47:01.108516 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:01.208857 master-0 kubenswrapper[4031]: E0318 08:47:01.208780 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:01.309734 master-0 kubenswrapper[4031]: E0318 08:47:01.309631 4031 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 08:47:01.311805 master-0 kubenswrapper[4031]: I0318 08:47:01.311762 4031 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 08:47:01.706879 master-0 kubenswrapper[4031]: I0318 08:47:01.706780 4031 apiserver.go:52] "Watching apiserver" Mar 18 08:47:01.712762 master-0 kubenswrapper[4031]: I0318 08:47:01.712709 4031 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 08:47:01.712984 master-0 kubenswrapper[4031]: I0318 08:47:01.712949 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr","openshift-network-operator/network-operator-7bd846bfc4-6rtpx"] Mar 18 08:47:01.713414 master-0 kubenswrapper[4031]: I0318 08:47:01.713370 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.713548 master-0 kubenswrapper[4031]: I0318 08:47:01.713489 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.717735 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.717818 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.717826 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.719207 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.719224 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 08:47:01.719559 master-0 kubenswrapper[4031]: I0318 08:47:01.719222 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 08:47:01.807545 master-0 kubenswrapper[4031]: I0318 08:47:01.807488 4031 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.860766 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod 
\"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.860844 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.860880 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.860916 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.861015 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.861949 
master-0 kubenswrapper[4031]: I0318 08:47:01.861100 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.861136 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.861949 master-0 kubenswrapper[4031]: I0318 08:47:01.861180 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.962622 master-0 kubenswrapper[4031]: I0318 08:47:01.962422 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.962622 master-0 kubenswrapper[4031]: I0318 08:47:01.962496 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.962622 master-0 kubenswrapper[4031]: I0318 08:47:01.962532 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.962622 master-0 kubenswrapper[4031]: I0318 08:47:01.962562 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.963608 master-0 kubenswrapper[4031]: I0318 08:47:01.962642 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.963608 master-0 kubenswrapper[4031]: I0318 08:47:01.962693 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.963608 master-0 kubenswrapper[4031]: I0318 08:47:01.962734 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.963608 master-0 kubenswrapper[4031]: I0318 08:47:01.962766 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.963984 master-0 kubenswrapper[4031]: I0318 08:47:01.963888 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.964072 master-0 kubenswrapper[4031]: I0318 08:47:01.963898 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.964175 master-0 kubenswrapper[4031]: I0318 08:47:01.964132 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.965298 master-0 kubenswrapper[4031]: I0318 08:47:01.964514 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:01.965298 master-0 kubenswrapper[4031]: E0318 08:47:01.964644 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:01.965298 master-0 kubenswrapper[4031]: E0318 08:47:01.964806 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:02.46476423 +0000 UTC m=+37.394289290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:01.966688 master-0 kubenswrapper[4031]: I0318 08:47:01.965664 4031 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 08:47:01.985639 master-0 kubenswrapper[4031]: I0318 08:47:01.984481 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.989134 master-0 kubenswrapper[4031]: I0318 08:47:01.989064 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:01.989881 master-0 kubenswrapper[4031]: I0318 08:47:01.989822 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:02.069793 master-0 kubenswrapper[4031]: I0318 08:47:02.069692 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:47:02.089959 master-0 kubenswrapper[4031]: W0318 08:47:02.089879 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b779ce3_07c4_45ca_b1ca_750c95ed3d0b.slice/crio-2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1 WatchSource:0}: Error finding container 2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1: Status 404 returned error can't find the container with id 2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1 Mar 18 08:47:02.467330 master-0 kubenswrapper[4031]: I0318 08:47:02.467230 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:02.467553 master-0 kubenswrapper[4031]: E0318 08:47:02.467481 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:02.467698 master-0 kubenswrapper[4031]: E0318 08:47:02.467652 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:03.46760753 +0000 UTC m=+38.397132620 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:02.530499 master-0 kubenswrapper[4031]: I0318 08:47:02.530414 4031 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 08:47:03.074168 master-0 kubenswrapper[4031]: I0318 08:47:03.074103 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerStarted","Data":"2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1"} Mar 18 08:47:03.118002 master-0 kubenswrapper[4031]: I0318 08:47:03.117911 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-tjfg6"] Mar 18 08:47:03.118362 master-0 kubenswrapper[4031]: I0318 08:47:03.118316 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.120245 master-0 kubenswrapper[4031]: I0318 08:47:03.120210 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 18 08:47:03.120245 master-0 kubenswrapper[4031]: I0318 08:47:03.120217 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 18 08:47:03.120428 master-0 kubenswrapper[4031]: I0318 08:47:03.120349 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 18 08:47:03.120920 master-0 kubenswrapper[4031]: I0318 08:47:03.120777 4031 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 18 08:47:03.272949 master-0 kubenswrapper[4031]: I0318 08:47:03.272898 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.273152 master-0 kubenswrapper[4031]: I0318 08:47:03.272963 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.273152 master-0 kubenswrapper[4031]: I0318 08:47:03.273003 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xpr5\" (UniqueName: 
\"kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.273152 master-0 kubenswrapper[4031]: I0318 08:47:03.273043 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.273152 master-0 kubenswrapper[4031]: I0318 08:47:03.273104 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374517 master-0 kubenswrapper[4031]: I0318 08:47:03.374351 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374517 master-0 kubenswrapper[4031]: I0318 08:47:03.374411 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " 
pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374517 master-0 kubenswrapper[4031]: I0318 08:47:03.374447 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374559 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374645 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xpr5\" (UniqueName: \"kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374725 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374777 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: 
\"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374802 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.374905 master-0 kubenswrapper[4031]: I0318 08:47:03.374861 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.403734 master-0 kubenswrapper[4031]: I0318 08:47:03.403686 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xpr5\" (UniqueName: \"kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5\") pod \"assisted-installer-controller-tjfg6\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") " pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.446188 master-0 kubenswrapper[4031]: I0318 08:47:03.446147 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:47:03.465362 master-0 kubenswrapper[4031]: W0318 08:47:03.465312 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9de07b_1ef1_4228_b310_1007d999dc7b.slice/crio-c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2 WatchSource:0}: Error finding container c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2: Status 404 returned error can't find the container with id c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2 Mar 18 08:47:03.475122 master-0 kubenswrapper[4031]: I0318 08:47:03.475080 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:03.475283 master-0 kubenswrapper[4031]: E0318 08:47:03.475233 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:03.475369 master-0 kubenswrapper[4031]: E0318 08:47:03.475311 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:05.475294784 +0000 UTC m=+40.404819804 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:04.078152 master-0 kubenswrapper[4031]: I0318 08:47:04.078087 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tjfg6" event={"ID":"0c9de07b-1ef1-4228-b310-1007d999dc7b","Type":"ContainerStarted","Data":"c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2"} Mar 18 08:47:05.490863 master-0 kubenswrapper[4031]: I0318 08:47:05.490814 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:47:05.491417 master-0 kubenswrapper[4031]: E0318 08:47:05.490939 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:05.491417 master-0 kubenswrapper[4031]: E0318 08:47:05.490994 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:09.49097872 +0000 UTC m=+44.420503730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:47:06.087667 master-0 kubenswrapper[4031]: I0318 08:47:06.087519 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerStarted","Data":"fd295b6b7843cd03ce43cecd7dcd871e030a3bf9af1473694567c5a5799d4c76"} Mar 18 08:47:06.098743 master-0 kubenswrapper[4031]: I0318 08:47:06.098653 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" podStartSLOduration=2.6544050820000002 podStartE2EDuration="6.098634497s" podCreationTimestamp="2026-03-18 08:47:00 +0000 UTC" firstStartedPulling="2026-03-18 08:47:02.092877202 +0000 UTC m=+37.022402252" lastFinishedPulling="2026-03-18 08:47:05.537106647 +0000 UTC m=+40.466631667" observedRunningTime="2026-03-18 08:47:06.098490463 +0000 UTC m=+41.028015483" watchObservedRunningTime="2026-03-18 08:47:06.098634497 +0000 UTC m=+41.028159537" Mar 18 08:47:06.468761 master-0 kubenswrapper[4031]: I0318 08:47:06.468674 4031 csr.go:261] certificate signing request csr-jrm2v is approved, waiting to be issued Mar 18 08:47:06.476383 master-0 kubenswrapper[4031]: I0318 08:47:06.476314 4031 csr.go:257] certificate signing request csr-jrm2v is issued Mar 18 08:47:07.477653 master-0 kubenswrapper[4031]: I0318 08:47:07.477609 4031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 01:36:37.837124841 +0000 UTC Mar 18 08:47:07.477653 master-0 kubenswrapper[4031]: I0318 08:47:07.477639 4031 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 16h49m30.359488298s for next certificate rotation Mar 18 08:47:08.135995 master-0 kubenswrapper[4031]: I0318 08:47:08.135941 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-hk5dx"] Mar 18 08:47:08.137323 master-0 kubenswrapper[4031]: I0318 08:47:08.137102 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-hk5dx" Mar 18 08:47:08.211666 master-0 kubenswrapper[4031]: I0318 08:47:08.211600 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kn7l\" (UniqueName: \"kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l\") pod \"mtu-prober-hk5dx\" (UID: \"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba\") " pod="openshift-network-operator/mtu-prober-hk5dx" Mar 18 08:47:08.312484 master-0 kubenswrapper[4031]: I0318 08:47:08.312396 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kn7l\" (UniqueName: \"kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l\") pod \"mtu-prober-hk5dx\" (UID: \"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba\") " pod="openshift-network-operator/mtu-prober-hk5dx" Mar 18 08:47:08.340069 master-0 kubenswrapper[4031]: I0318 08:47:08.339991 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kn7l\" (UniqueName: \"kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l\") pod \"mtu-prober-hk5dx\" (UID: \"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba\") " pod="openshift-network-operator/mtu-prober-hk5dx" Mar 18 08:47:08.454842 master-0 kubenswrapper[4031]: I0318 08:47:08.454782 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-hk5dx" Mar 18 08:47:08.469167 master-0 kubenswrapper[4031]: W0318 08:47:08.469106 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3eeb8b56_2c99_4cac_8b32_dd51c94e53ba.slice/crio-c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd WatchSource:0}: Error finding container c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd: Status 404 returned error can't find the container with id c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd Mar 18 08:47:08.478638 master-0 kubenswrapper[4031]: I0318 08:47:08.478265 4031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 05:05:12.304682483 +0000 UTC Mar 18 08:47:08.478638 master-0 kubenswrapper[4031]: I0318 08:47:08.478631 4031 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h18m3.826056877s for next certificate rotation Mar 18 08:47:09.094884 master-0 kubenswrapper[4031]: I0318 08:47:09.094796 4031 generic.go:334] "Generic (PLEG): container finished" podID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerID="19dda705eb005970ec7faa939c9f315d05d7277d2869c2b15c7b89d228425457" exitCode=0 Mar 18 08:47:09.095168 master-0 kubenswrapper[4031]: I0318 08:47:09.094926 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tjfg6" event={"ID":"0c9de07b-1ef1-4228-b310-1007d999dc7b","Type":"ContainerDied","Data":"19dda705eb005970ec7faa939c9f315d05d7277d2869c2b15c7b89d228425457"} Mar 18 08:47:09.097169 master-0 kubenswrapper[4031]: I0318 08:47:09.097104 4031 generic.go:334] "Generic (PLEG): container finished" podID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerID="6af9b3db51dc2800e23bac1d32175e8ad4a26ab1ee574f2d956ea30888e63922" exitCode=0 Mar 18 08:47:09.097169 master-0 
kubenswrapper[4031]: I0318 08:47:09.097162 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-hk5dx" event={"ID":"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba","Type":"ContainerDied","Data":"6af9b3db51dc2800e23bac1d32175e8ad4a26ab1ee574f2d956ea30888e63922"}
Mar 18 08:47:09.097370 master-0 kubenswrapper[4031]: I0318 08:47:09.097193 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-hk5dx" event={"ID":"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba","Type":"ContainerStarted","Data":"c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd"}
Mar 18 08:47:09.521672 master-0 kubenswrapper[4031]: I0318 08:47:09.521590 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:47:09.522488 master-0 kubenswrapper[4031]: E0318 08:47:09.521779 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:09.522488 master-0 kubenswrapper[4031]: E0318 08:47:09.521891 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:17.52185837 +0000 UTC m=+52.451383420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:10.137835 master-0 kubenswrapper[4031]: I0318 08:47:10.137754 4031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6"
Mar 18 08:47:10.144244 master-0 kubenswrapper[4031]: I0318 08:47:10.144182 4031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-hk5dx"
Mar 18 08:47:10.224799 master-0 kubenswrapper[4031]: I0318 08:47:10.224692 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle\") pod \"0c9de07b-1ef1-4228-b310-1007d999dc7b\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") "
Mar 18 08:47:10.224799 master-0 kubenswrapper[4031]: I0318 08:47:10.224798 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files\") pod \"0c9de07b-1ef1-4228-b310-1007d999dc7b\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") "
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.224874 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xpr5\" (UniqueName: \"kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5\") pod \"0c9de07b-1ef1-4228-b310-1007d999dc7b\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") "
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.224915 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf\") pod \"0c9de07b-1ef1-4228-b310-1007d999dc7b\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") "
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.224890 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "0c9de07b-1ef1-4228-b310-1007d999dc7b" (UID: "0c9de07b-1ef1-4228-b310-1007d999dc7b"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.224967 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kn7l\" (UniqueName: \"kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l\") pod \"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba\" (UID: \"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba\") "
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.225013 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf\") pod \"0c9de07b-1ef1-4228-b310-1007d999dc7b\" (UID: \"0c9de07b-1ef1-4228-b310-1007d999dc7b\") "
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.225014 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "0c9de07b-1ef1-4228-b310-1007d999dc7b" (UID: "0c9de07b-1ef1-4228-b310-1007d999dc7b"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:47:10.225119 master-0 kubenswrapper[4031]: I0318 08:47:10.225079 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "0c9de07b-1ef1-4228-b310-1007d999dc7b" (UID: "0c9de07b-1ef1-4228-b310-1007d999dc7b"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:47:10.225749 master-0 kubenswrapper[4031]: I0318 08:47:10.225171 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "0c9de07b-1ef1-4228-b310-1007d999dc7b" (UID: "0c9de07b-1ef1-4228-b310-1007d999dc7b"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 08:47:10.225749 master-0 kubenswrapper[4031]: I0318 08:47:10.225205 4031 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:10.225749 master-0 kubenswrapper[4031]: I0318 08:47:10.225231 4031 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:10.225749 master-0 kubenswrapper[4031]: I0318 08:47:10.225254 4031 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:10.230037 master-0 kubenswrapper[4031]: I0318 08:47:10.229698 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l" (OuterVolumeSpecName: "kube-api-access-5kn7l") pod "3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" (UID: "3eeb8b56-2c99-4cac-8b32-dd51c94e53ba"). InnerVolumeSpecName "kube-api-access-5kn7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:47:10.230037 master-0 kubenswrapper[4031]: I0318 08:47:10.229872 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5" (OuterVolumeSpecName: "kube-api-access-8xpr5") pod "0c9de07b-1ef1-4228-b310-1007d999dc7b" (UID: "0c9de07b-1ef1-4228-b310-1007d999dc7b"). InnerVolumeSpecName "kube-api-access-8xpr5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:47:10.326125 master-0 kubenswrapper[4031]: I0318 08:47:10.326006 4031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xpr5\" (UniqueName: \"kubernetes.io/projected/0c9de07b-1ef1-4228-b310-1007d999dc7b-kube-api-access-8xpr5\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:10.326125 master-0 kubenswrapper[4031]: I0318 08:47:10.326068 4031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kn7l\" (UniqueName: \"kubernetes.io/projected/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba-kube-api-access-5kn7l\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:10.326125 master-0 kubenswrapper[4031]: I0318 08:47:10.326090 4031 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/0c9de07b-1ef1-4228-b310-1007d999dc7b-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 18 08:47:11.104166 master-0 kubenswrapper[4031]: I0318 08:47:11.104092 4031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6"
Mar 18 08:47:11.105349 master-0 kubenswrapper[4031]: I0318 08:47:11.104081 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-tjfg6" event={"ID":"0c9de07b-1ef1-4228-b310-1007d999dc7b","Type":"ContainerDied","Data":"c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2"}
Mar 18 08:47:11.105349 master-0 kubenswrapper[4031]: I0318 08:47:11.104318 4031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2"
Mar 18 08:47:11.107298 master-0 kubenswrapper[4031]: I0318 08:47:11.107235 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-hk5dx" event={"ID":"3eeb8b56-2c99-4cac-8b32-dd51c94e53ba","Type":"ContainerDied","Data":"c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd"}
Mar 18 08:47:11.107298 master-0 kubenswrapper[4031]: I0318 08:47:11.107294 4031 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd"
Mar 18 08:47:11.107507 master-0 kubenswrapper[4031]: I0318 08:47:11.107355 4031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-hk5dx"
Mar 18 08:47:13.133255 master-0 kubenswrapper[4031]: I0318 08:47:13.133149 4031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-hk5dx"]
Mar 18 08:47:13.135523 master-0 kubenswrapper[4031]: I0318 08:47:13.135463 4031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-hk5dx"]
Mar 18 08:47:13.899363 master-0 kubenswrapper[4031]: I0318 08:47:13.899179 4031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" path="/var/lib/kubelet/pods/3eeb8b56-2c99-4cac-8b32-dd51c94e53ba/volumes"
Mar 18 08:47:14.910167 master-0 kubenswrapper[4031]: I0318 08:47:14.909283 4031 scope.go:117] "RemoveContainer" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526"
Mar 18 08:47:14.910167 master-0 kubenswrapper[4031]: I0318 08:47:14.909655 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 18 08:47:16.123943 master-0 kubenswrapper[4031]: I0318 08:47:16.123882 4031 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 08:47:16.125650 master-0 kubenswrapper[4031]: I0318 08:47:16.125552 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e0b37287226cec590faa4200c15d2fef886c4879e12913c9f633d02f362fc880"}
Mar 18 08:47:16.143787 master-0 kubenswrapper[4031]: I0318 08:47:16.143679 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=2.143658112 podStartE2EDuration="2.143658112s" podCreationTimestamp="2026-03-18 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:16.143622421 +0000 UTC m=+51.073147441" watchObservedRunningTime="2026-03-18 08:47:16.143658112 +0000 UTC m=+51.073183132"
Mar 18 08:47:17.577719 master-0 kubenswrapper[4031]: I0318 08:47:17.577641 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:47:17.577719 master-0 kubenswrapper[4031]: E0318 08:47:17.577663 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:17.578381 master-0 kubenswrapper[4031]: E0318 08:47:17.577817 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:33.577785847 +0000 UTC m=+68.507310897 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:18.016335 master-0 kubenswrapper[4031]: I0318 08:47:18.016292 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-h7vq8"]
Mar 18 08:47:18.016687 master-0 kubenswrapper[4031]: E0318 08:47:18.016663 4031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller"
Mar 18 08:47:18.016802 master-0 kubenswrapper[4031]: I0318 08:47:18.016786 4031 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller"
Mar 18 08:47:18.016895 master-0 kubenswrapper[4031]: E0318 08:47:18.016879 4031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober"
Mar 18 08:47:18.017009 master-0 kubenswrapper[4031]: I0318 08:47:18.016994 4031 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober"
Mar 18 08:47:18.017122 master-0 kubenswrapper[4031]: I0318 08:47:18.017106 4031 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober"
Mar 18 08:47:18.017218 master-0 kubenswrapper[4031]: I0318 08:47:18.017202 4031 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller"
Mar 18 08:47:18.017768 master-0 kubenswrapper[4031]: I0318 08:47:18.017699 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.020425 master-0 kubenswrapper[4031]: I0318 08:47:18.020372 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 08:47:18.022282 master-0 kubenswrapper[4031]: I0318 08:47:18.022222 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 08:47:18.022389 master-0 kubenswrapper[4031]: I0318 08:47:18.022224 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 08:47:18.025155 master-0 kubenswrapper[4031]: I0318 08:47:18.025119 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 18 08:47:18.081685 master-0 kubenswrapper[4031]: I0318 08:47:18.081619 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081685 master-0 kubenswrapper[4031]: I0318 08:47:18.081666 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081746 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081795 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081823 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081846 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081866 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.081934 master-0 kubenswrapper[4031]: I0318 08:47:18.081884 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082151 master-0 kubenswrapper[4031]: I0318 08:47:18.081939 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082151 master-0 kubenswrapper[4031]: I0318 08:47:18.082019 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082151 master-0 kubenswrapper[4031]: I0318 08:47:18.082090 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082258 master-0 kubenswrapper[4031]: I0318 08:47:18.082155 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082258 master-0 kubenswrapper[4031]: I0318 08:47:18.082187 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082258 master-0 kubenswrapper[4031]: I0318 08:47:18.082220 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082258 master-0 kubenswrapper[4031]: I0318 08:47:18.082251 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082536 master-0 kubenswrapper[4031]: I0318 08:47:18.082280 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.082536 master-0 kubenswrapper[4031]: I0318 08:47:18.082311 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.183487 master-0 kubenswrapper[4031]: I0318 08:47:18.183368 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.183487 master-0 kubenswrapper[4031]: I0318 08:47:18.183434 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.183487 master-0 kubenswrapper[4031]: I0318 08:47:18.183465 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183689 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183838 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183891 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183904 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183940 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.183958 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184068 master-0 kubenswrapper[4031]: I0318 08:47:18.184030 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184122 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184184 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184216 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184248 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184282 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184321 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184352 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184387 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184425 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.184456 master-0 kubenswrapper[4031]: I0318 08:47:18.184457 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184489 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184519 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184632 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184694 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184718 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184739 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184758 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184777 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184798 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184826 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.184970 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185062 master-0 kubenswrapper[4031]: I0318 08:47:18.185031 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.185908 master-0 kubenswrapper[4031]: I0318 08:47:18.185192 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.206514 master-0 kubenswrapper[4031]: I0318 08:47:18.206431 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:47:18.212103 master-0 kubenswrapper[4031]: I0318 08:47:18.212028 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-68tmr"]
Mar 18 08:47:18.213038 master-0 kubenswrapper[4031]: I0318 08:47:18.212979 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.215882 master-0 kubenswrapper[4031]: I0318 08:47:18.215833 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 08:47:18.217905 master-0 kubenswrapper[4031]: I0318 08:47:18.217788 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285523 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285557 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285591 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285610 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285628 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285644 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " 
pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.285665 master-0 kubenswrapper[4031]: I0318 08:47:18.285662 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.286181 master-0 kubenswrapper[4031]: I0318 08:47:18.285693 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.334055 master-0 kubenswrapper[4031]: I0318 08:47:18.333972 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-h7vq8" Mar 18 08:47:18.352090 master-0 kubenswrapper[4031]: W0318 08:47:18.352028 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf1fbcf2_d4de_4015_89fc_2565e855a04d.slice/crio-c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128 WatchSource:0}: Error finding container c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128: Status 404 returned error can't find the container with id c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128 Mar 18 08:47:18.386931 master-0 kubenswrapper[4031]: I0318 08:47:18.386821 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.386931 master-0 kubenswrapper[4031]: I0318 08:47:18.386917 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387304 master-0 kubenswrapper[4031]: I0318 08:47:18.386967 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387304 master-0 kubenswrapper[4031]: I0318 08:47:18.387018 4031 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387304 master-0 kubenswrapper[4031]: I0318 08:47:18.387250 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387319 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387333 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387321 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 
kubenswrapper[4031]: I0318 08:47:18.387393 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387463 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387495 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.387627 master-0 kubenswrapper[4031]: I0318 08:47:18.387602 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.388279 master-0 kubenswrapper[4031]: I0318 08:47:18.388204 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.388377 master-0 kubenswrapper[4031]: I0318 08:47:18.388345 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.388644 master-0 kubenswrapper[4031]: I0318 08:47:18.388598 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.417346 master-0 kubenswrapper[4031]: I0318 08:47:18.417248 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.536050 master-0 kubenswrapper[4031]: I0318 08:47:18.535929 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:47:18.544582 master-0 kubenswrapper[4031]: W0318 08:47:18.544540 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdd2f1fd_1a94_4f4e_a275_b075f432f763.slice/crio-ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06 WatchSource:0}: Error finding container ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06: Status 404 returned error can't find the container with id ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06 Mar 18 08:47:19.000539 master-0 kubenswrapper[4031]: I0318 08:47:19.000430 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-2xs9n"] Mar 18 08:47:19.004317 master-0 kubenswrapper[4031]: I0318 08:47:19.000984 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.004317 master-0 kubenswrapper[4031]: E0318 08:47:19.001087 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:19.094120 master-0 kubenswrapper[4031]: I0318 08:47:19.093944 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.094120 master-0 kubenswrapper[4031]: I0318 08:47:19.094001 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.135484 master-0 kubenswrapper[4031]: I0318 08:47:19.135417 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h7vq8" event={"ID":"af1fbcf2-d4de-4015-89fc-2565e855a04d","Type":"ContainerStarted","Data":"c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128"} Mar 18 08:47:19.137106 master-0 kubenswrapper[4031]: I0318 08:47:19.137022 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerStarted","Data":"ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06"} Mar 18 08:47:19.194863 master-0 kubenswrapper[4031]: I0318 08:47:19.194723 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " 
pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.194863 master-0 kubenswrapper[4031]: I0318 08:47:19.194786 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.196875 master-0 kubenswrapper[4031]: E0318 08:47:19.195164 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:19.196875 master-0 kubenswrapper[4031]: E0318 08:47:19.195338 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:19.695304956 +0000 UTC m=+54.624830006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:19.214182 master-0 kubenswrapper[4031]: I0318 08:47:19.214105 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.702366 master-0 kubenswrapper[4031]: I0318 08:47:19.702267 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:19.702723 master-0 kubenswrapper[4031]: E0318 08:47:19.702464 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:19.702723 master-0 kubenswrapper[4031]: E0318 08:47:19.702541 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:20.702517534 +0000 UTC m=+55.632042584 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:20.707080 master-0 kubenswrapper[4031]: I0318 08:47:20.707026 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:20.707550 master-0 kubenswrapper[4031]: E0318 08:47:20.707244 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:20.707550 master-0 kubenswrapper[4031]: E0318 08:47:20.707343 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:22.707320404 +0000 UTC m=+57.636845424 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:20.892399 master-0 kubenswrapper[4031]: I0318 08:47:20.892352 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:20.892621 master-0 kubenswrapper[4031]: E0318 08:47:20.892482 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:22.146260 master-0 kubenswrapper[4031]: I0318 08:47:22.146081 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="4d7c904f1acd55b9d920d547c73d752e1d361d2495697dc27fa3307ea6bf7119" exitCode=0 Mar 18 08:47:22.146260 master-0 kubenswrapper[4031]: I0318 08:47:22.146130 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"4d7c904f1acd55b9d920d547c73d752e1d361d2495697dc27fa3307ea6bf7119"} Mar 18 08:47:22.723449 master-0 kubenswrapper[4031]: I0318 08:47:22.723400 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:22.723696 master-0 kubenswrapper[4031]: E0318 08:47:22.723648 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:22.723802 master-0 kubenswrapper[4031]: E0318 08:47:22.723784 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs 
podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:26.723751377 +0000 UTC m=+61.653276427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:22.892373 master-0 kubenswrapper[4031]: I0318 08:47:22.892332 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:22.892555 master-0 kubenswrapper[4031]: E0318 08:47:22.892481 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:24.892310 master-0 kubenswrapper[4031]: I0318 08:47:24.891797 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:24.892310 master-0 kubenswrapper[4031]: E0318 08:47:24.891996 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:25.387594 master-0 kubenswrapper[4031]: I0318 08:47:25.387532 4031 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 08:47:26.764790 master-0 kubenswrapper[4031]: I0318 08:47:26.763763 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:26.764790 master-0 kubenswrapper[4031]: E0318 08:47:26.764021 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:26.764790 master-0 kubenswrapper[4031]: E0318 08:47:26.764289 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:34.764261372 +0000 UTC m=+69.693786402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:26.892196 master-0 kubenswrapper[4031]: I0318 08:47:26.892141 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:26.892357 master-0 kubenswrapper[4031]: E0318 08:47:26.892271 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:28.892022 master-0 kubenswrapper[4031]: I0318 08:47:28.891971 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:28.892467 master-0 kubenswrapper[4031]: E0318 08:47:28.892116 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:29.165789 master-0 kubenswrapper[4031]: I0318 08:47:29.165646 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerStarted","Data":"8e721654d7a6dd53ba602bb38e73e10bda4fb74bd83575e72d850a92e1f3620b"} Mar 18 08:47:30.175077 master-0 kubenswrapper[4031]: I0318 08:47:30.174961 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8e721654d7a6dd53ba602bb38e73e10bda4fb74bd83575e72d850a92e1f3620b" exitCode=0 Mar 18 08:47:30.175077 master-0 kubenswrapper[4031]: I0318 08:47:30.175044 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"8e721654d7a6dd53ba602bb38e73e10bda4fb74bd83575e72d850a92e1f3620b"} Mar 18 08:47:30.409224 master-0 kubenswrapper[4031]: I0318 08:47:30.409151 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"] Mar 18 08:47:30.410183 master-0 kubenswrapper[4031]: I0318 08:47:30.410138 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.413472 master-0 kubenswrapper[4031]: I0318 08:47:30.413357 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 08:47:30.415070 master-0 kubenswrapper[4031]: I0318 08:47:30.413674 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 08:47:30.415070 master-0 kubenswrapper[4031]: I0318 08:47:30.414009 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 08:47:30.415070 master-0 kubenswrapper[4031]: I0318 08:47:30.414210 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 08:47:30.415070 master-0 kubenswrapper[4031]: I0318 08:47:30.414444 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 08:47:30.486348 master-0 kubenswrapper[4031]: I0318 08:47:30.486265 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.486535 master-0 kubenswrapper[4031]: I0318 08:47:30.486363 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.486535 master-0 kubenswrapper[4031]: I0318 08:47:30.486423 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.486535 master-0 kubenswrapper[4031]: I0318 08:47:30.486459 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.587795 master-0 kubenswrapper[4031]: I0318 08:47:30.587708 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.588015 master-0 kubenswrapper[4031]: I0318 08:47:30.587926 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.588071 master-0 kubenswrapper[4031]: I0318 08:47:30.588006 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.588119 master-0 kubenswrapper[4031]: I0318 08:47:30.588082 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.589843 master-0 kubenswrapper[4031]: I0318 08:47:30.589278 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.590053 master-0 kubenswrapper[4031]: I0318 08:47:30.590005 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.594627 master-0 kubenswrapper[4031]: I0318 08:47:30.594173 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.617875 master-0 kubenswrapper[4031]: I0318 08:47:30.617811 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcsfv"]
Mar 18 08:47:30.618773 master-0 kubenswrapper[4031]: I0318 08:47:30.618740 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.620873 master-0 kubenswrapper[4031]: I0318 08:47:30.620831 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 18 08:47:30.621013 master-0 kubenswrapper[4031]: I0318 08:47:30.620842 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 08:47:30.625402 master-0 kubenswrapper[4031]: I0318 08:47:30.625359 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.688834 master-0 kubenswrapper[4031]: I0318 08:47:30.688715 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.688834 master-0 kubenswrapper[4031]: I0318 08:47:30.688772 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689039 master-0 kubenswrapper[4031]: I0318 08:47:30.688848 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689039 master-0 kubenswrapper[4031]: I0318 08:47:30.688918 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689039 master-0 kubenswrapper[4031]: I0318 08:47:30.688967 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689039 master-0 kubenswrapper[4031]: I0318 08:47:30.689006 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689190 master-0 kubenswrapper[4031]: I0318 08:47:30.689044 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhmbk\" (UniqueName: \"kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689190 master-0 kubenswrapper[4031]: I0318 08:47:30.689090 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689190 master-0 kubenswrapper[4031]: I0318 08:47:30.689143 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689190 master-0 kubenswrapper[4031]: I0318 08:47:30.689171 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689336 master-0 kubenswrapper[4031]: I0318 08:47:30.689196 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689336 master-0 kubenswrapper[4031]: I0318 08:47:30.689234 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689336 master-0 kubenswrapper[4031]: I0318 08:47:30.689249 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689336 master-0 kubenswrapper[4031]: I0318 08:47:30.689267 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689336 master-0 kubenswrapper[4031]: I0318 08:47:30.689314 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689532 master-0 kubenswrapper[4031]: I0318 08:47:30.689389 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689532 master-0 kubenswrapper[4031]: I0318 08:47:30.689458 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689532 master-0 kubenswrapper[4031]: I0318 08:47:30.689481 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689532 master-0 kubenswrapper[4031]: I0318 08:47:30.689500 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.689532 master-0 kubenswrapper[4031]: I0318 08:47:30.689521 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.745049 master-0 kubenswrapper[4031]: I0318 08:47:30.745027 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790051 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790113 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790155 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790190 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790228 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhmbk\" (UniqueName: \"kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790261 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790292 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790321 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790350 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790381 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790435 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790476 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790537 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.790680 master-0 kubenswrapper[4031]: I0318 08:47:30.790621 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791170 master-0 kubenswrapper[4031]: I0318 08:47:30.790654 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791170 master-0 kubenswrapper[4031]: I0318 08:47:30.790902 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791343 master-0 kubenswrapper[4031]: I0318 08:47:30.791326 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791403 master-0 kubenswrapper[4031]: I0318 08:47:30.791349 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791615 master-0 kubenswrapper[4031]: I0318 08:47:30.791583 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791768 master-0 kubenswrapper[4031]: I0318 08:47:30.791732 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.791839 master-0 kubenswrapper[4031]: I0318 08:47:30.791814 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792057 master-0 kubenswrapper[4031]: I0318 08:47:30.792033 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792105 master-0 kubenswrapper[4031]: I0318 08:47:30.792076 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792147 master-0 kubenswrapper[4031]: I0318 08:47:30.792104 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792147 master-0 kubenswrapper[4031]: I0318 08:47:30.792138 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792214 master-0 kubenswrapper[4031]: I0318 08:47:30.792156 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792214 master-0 kubenswrapper[4031]: I0318 08:47:30.792175 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792214 master-0 kubenswrapper[4031]: I0318 08:47:30.792206 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792444 master-0 kubenswrapper[4031]: I0318 08:47:30.792236 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792444 master-0 kubenswrapper[4031]: I0318 08:47:30.792256 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792444 master-0 kubenswrapper[4031]: I0318 08:47:30.792276 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792444 master-0 kubenswrapper[4031]: I0318 08:47:30.792297 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792444 master-0 kubenswrapper[4031]: I0318 08:47:30.792339 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792640 master-0 kubenswrapper[4031]: I0318 08:47:30.792495 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792640 master-0 kubenswrapper[4031]: I0318 08:47:30.792526 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.792640 master-0 kubenswrapper[4031]: I0318 08:47:30.792557 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.803995 master-0 kubenswrapper[4031]: I0318 08:47:30.795174 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.803995 master-0 kubenswrapper[4031]: I0318 08:47:30.795503 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.803995 master-0 kubenswrapper[4031]: I0318 08:47:30.796730 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.806309 master-0 kubenswrapper[4031]: I0318 08:47:30.806281 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhmbk\" (UniqueName: \"kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk\") pod \"ovnkube-node-gcsfv\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:30.892243 master-0 kubenswrapper[4031]: I0318 08:47:30.892193 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:47:30.892439 master-0 kubenswrapper[4031]: E0318 08:47:30.892342 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:47:30.936958 master-0 kubenswrapper[4031]: I0318 08:47:30.936911 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv"
Mar 18 08:47:32.891784 master-0 kubenswrapper[4031]: I0318 08:47:32.891731 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:47:32.892421 master-0 kubenswrapper[4031]: E0318 08:47:32.891869 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:47:33.595897 master-0 kubenswrapper[4031]: I0318 08:47:33.595800 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-7r2q2"]
Mar 18 08:47:33.596222 master-0 kubenswrapper[4031]: I0318 08:47:33.596177 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:47:33.596296 master-0 kubenswrapper[4031]: E0318 08:47:33.596242 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:47:33.615012 master-0 kubenswrapper[4031]: I0318 08:47:33.614951 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:47:33.615012 master-0 kubenswrapper[4031]: I0318 08:47:33.615002 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:47:33.615399 master-0 kubenswrapper[4031]: E0318 08:47:33.615112 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:33.615399 master-0 kubenswrapper[4031]: E0318 08:47:33.615150 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:05.615137557 +0000 UTC m=+100.544662567 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:47:33.715834 master-0 kubenswrapper[4031]: I0318 08:47:33.715784 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:47:33.737695 master-0 kubenswrapper[4031]: E0318 08:47:33.737656 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:47:33.737695 master-0 kubenswrapper[4031]: E0318 08:47:33.737685 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:47:33.737695 master-0 kubenswrapper[4031]: E0318 08:47:33.737699 4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:47:33.738030 master-0 kubenswrapper[4031]: E0318 08:47:33.737750 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:34.237733745 +0000 UTC m=+69.167258755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:47:33.902923 master-0 kubenswrapper[4031]: W0318 08:47:33.902836 4031 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 08:47:33.903615 master-0 kubenswrapper[4031]: I0318 08:47:33.903558 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 08:47:34.206863 master-0 kubenswrapper[4031]: I0318 08:47:34.206784 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv" event={"ID":"f45955c7-5b5e-4172-8ba8-17f6f42ab94f","Type":"ContainerStarted","Data":"9af47a1fce5f49f05d98ded301fb823e1f5cbb6403282d7c4e47623e10192f4e"}
Mar 18 08:47:34.319768 master-0 kubenswrapper[4031]: I0318 08:47:34.319730 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:47:34.319909 master-0 kubenswrapper[4031]: E0318 08:47:34.319888 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:47:34.319961 master-0 kubenswrapper[4031]: E0318 08:47:34.319912 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:47:34.319961 master-0 kubenswrapper[4031]: E0318 08:47:34.319925 4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:47:34.320020 master-0 kubenswrapper[4031]: E0318 08:47:34.319977 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed.
No retries permitted until 2026-03-18 08:47:35.31996059 +0000 UTC m=+70.249485610 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:34.824326 master-0 kubenswrapper[4031]: I0318 08:47:34.824061 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:34.824476 master-0 kubenswrapper[4031]: E0318 08:47:34.824228 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:34.824476 master-0 kubenswrapper[4031]: E0318 08:47:34.824404 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:47:50.824383385 +0000 UTC m=+85.753908395 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:34.891913 master-0 kubenswrapper[4031]: I0318 08:47:34.891879 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:34.892047 master-0 kubenswrapper[4031]: I0318 08:47:34.891896 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:34.892047 master-0 kubenswrapper[4031]: E0318 08:47:34.892012 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:34.892154 master-0 kubenswrapper[4031]: E0318 08:47:34.892122 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:35.209979 master-0 kubenswrapper[4031]: I0318 08:47:35.209918 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-h7vq8" event={"ID":"af1fbcf2-d4de-4015-89fc-2565e855a04d","Type":"ContainerStarted","Data":"741e3d11c7d6a5f8e0f391a861bb25690e50cb684db7c6be742c8320e2ed4d1c"} Mar 18 08:47:35.212123 master-0 kubenswrapper[4031]: I0318 08:47:35.212103 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerStarted","Data":"86a3328feac9c249513336a7fa0f056e06a4294596c0e3710fd31d0dfd2c588c"} Mar 18 08:47:35.212177 master-0 kubenswrapper[4031]: I0318 08:47:35.212127 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerStarted","Data":"9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438"} Mar 18 08:47:35.218809 master-0 kubenswrapper[4031]: I0318 08:47:35.218789 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="18607609fc2c048f02839d5d864c5753901b636e45e41dd655403f7b6b802044" exitCode=0 Mar 18 08:47:35.218809 master-0 kubenswrapper[4031]: I0318 08:47:35.218814 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"18607609fc2c048f02839d5d864c5753901b636e45e41dd655403f7b6b802044"} Mar 18 08:47:35.224291 master-0 kubenswrapper[4031]: I0318 08:47:35.224201 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=2.224173567 podStartE2EDuration="2.224173567s" 
podCreationTimestamp="2026-03-18 08:47:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:35.220808362 +0000 UTC m=+70.150333372" watchObservedRunningTime="2026-03-18 08:47:35.224173567 +0000 UTC m=+70.153698587" Mar 18 08:47:35.251351 master-0 kubenswrapper[4031]: I0318 08:47:35.251244 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-h7vq8" podStartSLOduration=2.284718029 podStartE2EDuration="18.251224326s" podCreationTimestamp="2026-03-18 08:47:17 +0000 UTC" firstStartedPulling="2026-03-18 08:47:18.355611141 +0000 UTC m=+53.285136181" lastFinishedPulling="2026-03-18 08:47:34.322117428 +0000 UTC m=+69.251642478" observedRunningTime="2026-03-18 08:47:35.235904571 +0000 UTC m=+70.165429601" watchObservedRunningTime="2026-03-18 08:47:35.251224326 +0000 UTC m=+70.180749336" Mar 18 08:47:35.329761 master-0 kubenswrapper[4031]: I0318 08:47:35.329695 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:35.329937 master-0 kubenswrapper[4031]: E0318 08:47:35.329869 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:47:35.329937 master-0 kubenswrapper[4031]: E0318 08:47:35.329884 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:47:35.329937 master-0 kubenswrapper[4031]: E0318 08:47:35.329893 4031 projected.go:194] Error preparing data for 
projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:35.329937 master-0 kubenswrapper[4031]: E0318 08:47:35.329933 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:37.329920236 +0000 UTC m=+72.259445246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:36.207092 master-0 kubenswrapper[4031]: I0318 08:47:36.206984 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-lf7kq"] Mar 18 08:47:36.207620 master-0 kubenswrapper[4031]: I0318 08:47:36.207538 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.210170 master-0 kubenswrapper[4031]: I0318 08:47:36.210096 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 08:47:36.210758 master-0 kubenswrapper[4031]: I0318 08:47:36.210166 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 08:47:36.212089 master-0 kubenswrapper[4031]: I0318 08:47:36.212038 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 08:47:36.213426 master-0 kubenswrapper[4031]: I0318 08:47:36.213370 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 08:47:36.213734 master-0 kubenswrapper[4031]: I0318 08:47:36.213696 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 08:47:36.237885 master-0 kubenswrapper[4031]: I0318 08:47:36.237776 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.237885 master-0 kubenswrapper[4031]: I0318 08:47:36.237826 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 
08:47:36.238270 master-0 kubenswrapper[4031]: I0318 08:47:36.238082 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.238270 master-0 kubenswrapper[4031]: I0318 08:47:36.238180 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.339367 master-0 kubenswrapper[4031]: I0318 08:47:36.339267 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.339715 master-0 kubenswrapper[4031]: E0318 08:47:36.339498 4031 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Mar 18 08:47:36.339715 master-0 kubenswrapper[4031]: I0318 08:47:36.339532 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.339715 master-0 
kubenswrapper[4031]: E0318 08:47:36.339711 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert podName:57affd8b-d1ce-40d2-b31e-7b18645ca7b6 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:36.839681356 +0000 UTC m=+71.769206406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert") pod "network-node-identity-lf7kq" (UID: "57affd8b-d1ce-40d2-b31e-7b18645ca7b6") : secret "network-node-identity-cert" not found Mar 18 08:47:36.339928 master-0 kubenswrapper[4031]: I0318 08:47:36.339790 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.339928 master-0 kubenswrapper[4031]: I0318 08:47:36.339845 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.341191 master-0 kubenswrapper[4031]: I0318 08:47:36.341126 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.342076 master-0 kubenswrapper[4031]: I0318 08:47:36.341987 4031 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.425347 master-0 kubenswrapper[4031]: I0318 08:47:36.425283 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.843167 master-0 kubenswrapper[4031]: I0318 08:47:36.843110 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.846644 master-0 kubenswrapper[4031]: I0318 08:47:36.846612 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:36.892078 master-0 kubenswrapper[4031]: I0318 08:47:36.892004 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:36.892078 master-0 kubenswrapper[4031]: I0318 08:47:36.892051 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:36.892287 master-0 kubenswrapper[4031]: E0318 08:47:36.892177 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:36.892287 master-0 kubenswrapper[4031]: E0318 08:47:36.892262 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:37.125445 master-0 kubenswrapper[4031]: I0318 08:47:37.125287 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:47:37.146857 master-0 kubenswrapper[4031]: W0318 08:47:37.146767 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57affd8b_d1ce_40d2_b31e_7b18645ca7b6.slice/crio-81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e WatchSource:0}: Error finding container 81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e: Status 404 returned error can't find the container with id 81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e Mar 18 08:47:37.226884 master-0 kubenswrapper[4031]: I0318 08:47:37.226787 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerStarted","Data":"81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e"} Mar 18 08:47:37.349534 master-0 kubenswrapper[4031]: I0318 08:47:37.349431 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:37.349798 master-0 kubenswrapper[4031]: E0318 08:47:37.349683 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:47:37.349798 master-0 kubenswrapper[4031]: E0318 08:47:37.349713 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:47:37.349798 master-0 kubenswrapper[4031]: E0318 08:47:37.349727 
4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:37.349798 master-0 kubenswrapper[4031]: E0318 08:47:37.349794 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:41.349773274 +0000 UTC m=+76.279298294 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:38.892407 master-0 kubenswrapper[4031]: I0318 08:47:38.892352 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:38.893058 master-0 kubenswrapper[4031]: I0318 08:47:38.892447 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:38.893058 master-0 kubenswrapper[4031]: E0318 08:47:38.892558 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:38.893058 master-0 kubenswrapper[4031]: E0318 08:47:38.892677 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:40.892320 master-0 kubenswrapper[4031]: I0318 08:47:40.892266 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:40.892320 master-0 kubenswrapper[4031]: I0318 08:47:40.892299 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:40.893297 master-0 kubenswrapper[4031]: E0318 08:47:40.892425 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:40.893297 master-0 kubenswrapper[4031]: E0318 08:47:40.892589 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:41.237302 master-0 kubenswrapper[4031]: I0318 08:47:41.237254 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="29cb6a70b4f03bbaa88bb2a9cd200f77d44062bf7d6a056e592a38539d450a65" exitCode=0 Mar 18 08:47:41.237302 master-0 kubenswrapper[4031]: I0318 08:47:41.237295 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"29cb6a70b4f03bbaa88bb2a9cd200f77d44062bf7d6a056e592a38539d450a65"} Mar 18 08:47:41.386672 master-0 kubenswrapper[4031]: I0318 08:47:41.386585 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:41.386890 master-0 kubenswrapper[4031]: E0318 08:47:41.386836 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:47:41.386890 master-0 kubenswrapper[4031]: E0318 08:47:41.386880 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:47:41.387031 master-0 kubenswrapper[4031]: E0318 08:47:41.386898 4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:41.387031 master-0 kubenswrapper[4031]: E0318 08:47:41.386979 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:47:49.386952865 +0000 UTC m=+84.316477925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:42.891850 master-0 kubenswrapper[4031]: I0318 08:47:42.891791 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:42.892674 master-0 kubenswrapper[4031]: E0318 08:47:42.891990 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:42.892674 master-0 kubenswrapper[4031]: I0318 08:47:42.892080 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:42.892674 master-0 kubenswrapper[4031]: E0318 08:47:42.892216 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:44.892163 master-0 kubenswrapper[4031]: I0318 08:47:44.892098 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:44.893288 master-0 kubenswrapper[4031]: I0318 08:47:44.892129 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:44.893288 master-0 kubenswrapper[4031]: E0318 08:47:44.892358 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:44.893288 master-0 kubenswrapper[4031]: E0318 08:47:44.892487 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:46.891605 master-0 kubenswrapper[4031]: I0318 08:47:46.891538 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:46.892460 master-0 kubenswrapper[4031]: I0318 08:47:46.891606 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:46.892460 master-0 kubenswrapper[4031]: E0318 08:47:46.891725 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:46.892460 master-0 kubenswrapper[4031]: E0318 08:47:46.891846 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:48.892709 master-0 kubenswrapper[4031]: I0318 08:47:48.892194 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:48.892709 master-0 kubenswrapper[4031]: E0318 08:47:48.892314 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:48.893404 master-0 kubenswrapper[4031]: I0318 08:47:48.893324 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:48.893404 master-0 kubenswrapper[4031]: E0318 08:47:48.893382 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:49.460438 master-0 kubenswrapper[4031]: I0318 08:47:49.459844 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:49.460438 master-0 kubenswrapper[4031]: E0318 08:47:49.460043 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 18 08:47:49.460438 master-0 kubenswrapper[4031]: E0318 08:47:49.460060 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 18 08:47:49.460438 master-0 kubenswrapper[4031]: E0318 08:47:49.460071 4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:49.460438 master-0 kubenswrapper[4031]: E0318 08:47:49.460125 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:05.460104822 +0000 UTC m=+100.389629832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 18 08:47:50.871393 master-0 kubenswrapper[4031]: I0318 08:47:50.871345 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:50.872087 master-0 kubenswrapper[4031]: E0318 08:47:50.871485 4031 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:50.872087 master-0 kubenswrapper[4031]: E0318 08:47:50.871598 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.871580498 +0000 UTC m=+117.801105498 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 18 08:47:50.892605 master-0 kubenswrapper[4031]: I0318 08:47:50.892554 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:50.892692 master-0 kubenswrapper[4031]: I0318 08:47:50.892605 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:50.892742 master-0 kubenswrapper[4031]: E0318 08:47:50.892690 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:50.892822 master-0 kubenswrapper[4031]: E0318 08:47:50.892780 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:52.892525 master-0 kubenswrapper[4031]: I0318 08:47:52.892477 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:52.893007 master-0 kubenswrapper[4031]: I0318 08:47:52.892531 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:52.893007 master-0 kubenswrapper[4031]: E0318 08:47:52.892632 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:52.893007 master-0 kubenswrapper[4031]: E0318 08:47:52.892855 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:52.923060 master-0 kubenswrapper[4031]: I0318 08:47:52.923019 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 08:47:54.892328 master-0 kubenswrapper[4031]: I0318 08:47:54.892249 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:54.892328 master-0 kubenswrapper[4031]: I0318 08:47:54.892296 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:54.892847 master-0 kubenswrapper[4031]: E0318 08:47:54.892435 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:54.892847 master-0 kubenswrapper[4031]: E0318 08:47:54.892762 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:56.035853 master-0 kubenswrapper[4031]: I0318 08:47:56.035743 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 08:47:56.892133 master-0 kubenswrapper[4031]: I0318 08:47:56.891965 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:56.892688 master-0 kubenswrapper[4031]: I0318 08:47:56.891963 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:56.892688 master-0 kubenswrapper[4031]: E0318 08:47:56.892151 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:56.892688 master-0 kubenswrapper[4031]: E0318 08:47:56.892309 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:57.144397 master-0 kubenswrapper[4031]: I0318 08:47:57.144174 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=5.144132957 podStartE2EDuration="5.144132957s" podCreationTimestamp="2026-03-18 08:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:56.308318978 +0000 UTC m=+91.237843988" watchObservedRunningTime="2026-03-18 08:47:57.144132957 +0000 UTC m=+92.073658007" Mar 18 08:47:57.146845 master-0 kubenswrapper[4031]: I0318 08:47:57.146749 4031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcsfv"] Mar 18 08:47:58.892381 master-0 kubenswrapper[4031]: I0318 08:47:58.892326 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:47:58.892899 master-0 kubenswrapper[4031]: E0318 08:47:58.892496 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:47:58.892899 master-0 kubenswrapper[4031]: I0318 08:47:58.892809 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:47:58.893046 master-0 kubenswrapper[4031]: E0318 08:47:58.892989 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37" Mar 18 08:47:59.297834 master-0 kubenswrapper[4031]: I0318 08:47:59.297776 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerStarted","Data":"fdb4bcca892ef3b8b38b6412f754f472839917394e632bf7ec218fe086926be2"} Mar 18 08:47:59.301547 master-0 kubenswrapper[4031]: I0318 08:47:59.301500 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerStarted","Data":"7a5f71287e8b5eb717808046e6ba2bfb7e60eb4819b757b6fc0b37c1ed02f420"} Mar 18 08:47:59.301694 master-0 kubenswrapper[4031]: I0318 08:47:59.301548 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerStarted","Data":"bc980214f5ac00955c18c85e33c06b43e507e05072c88b82fc62ef6405a7548a"} Mar 18 08:47:59.303731 master-0 kubenswrapper[4031]: I0318 08:47:59.303672 4031 generic.go:334] "Generic (PLEG): container finished" 
podID="f45955c7-5b5e-4172-8ba8-17f6f42ab94f" containerID="ebc0e0f0f29deaa42b66d4757db6acb7bcb2013de10c6d0ece78be8a41d14a9a" exitCode=0 Mar 18 08:47:59.303801 master-0 kubenswrapper[4031]: I0318 08:47:59.303770 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv" event={"ID":"f45955c7-5b5e-4172-8ba8-17f6f42ab94f","Type":"ContainerDied","Data":"ebc0e0f0f29deaa42b66d4757db6acb7bcb2013de10c6d0ece78be8a41d14a9a"} Mar 18 08:47:59.310183 master-0 kubenswrapper[4031]: I0318 08:47:59.310136 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="332c9bf8c34c932234aed0104fb033cece220b16a730251a8ed2dddb4807fbb9" exitCode=0 Mar 18 08:47:59.310360 master-0 kubenswrapper[4031]: I0318 08:47:59.310195 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"332c9bf8c34c932234aed0104fb033cece220b16a730251a8ed2dddb4807fbb9"} Mar 18 08:47:59.320935 master-0 kubenswrapper[4031]: I0318 08:47:59.320847 4031 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv" Mar 18 08:47:59.323742 master-0 kubenswrapper[4031]: I0318 08:47:59.323655 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" podStartSLOduration=4.989892968 podStartE2EDuration="29.323621907s" podCreationTimestamp="2026-03-18 08:47:30 +0000 UTC" firstStartedPulling="2026-03-18 08:47:34.474037625 +0000 UTC m=+69.403562635" lastFinishedPulling="2026-03-18 08:47:58.807766534 +0000 UTC m=+93.737291574" observedRunningTime="2026-03-18 08:47:59.323161306 +0000 UTC m=+94.252686326" watchObservedRunningTime="2026-03-18 08:47:59.323621907 +0000 UTC m=+94.253146957" Mar 18 08:47:59.324691 master-0 kubenswrapper[4031]: I0318 08:47:59.324632 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=4.324623659 podStartE2EDuration="4.324623659s" podCreationTimestamp="2026-03-18 08:47:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:47:59.309627092 +0000 UTC m=+94.239152142" watchObservedRunningTime="2026-03-18 08:47:59.324623659 +0000 UTC m=+94.254148709" Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366045 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366541 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhmbk\" (UniqueName: \"kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk\") pod 
\"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366753 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366804 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366838 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366871 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366903 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366928 
4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366956 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366985 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367051 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367076 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367102 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367127 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367153 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.366798 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash" (OuterVolumeSpecName: "host-slash") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367530 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.369359 master-0 kubenswrapper[4031]: I0318 08:47:59.367561 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.367607 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.367740 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.367810 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368174 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368275 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368337 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log" (OuterVolumeSpecName: "node-log") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368347 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368364 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.367841 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.368901 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.370000 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.370047 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.370078 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.370108 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.370301 master-0 kubenswrapper[4031]: I0318 08:47:59.370093 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370933 master-0 kubenswrapper[4031]: I0318 08:47:59.370130 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370933 master-0 kubenswrapper[4031]: I0318 08:47:59.370136 4031 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket\") pod \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\" (UID: \"f45955c7-5b5e-4172-8ba8-17f6f42ab94f\") " Mar 18 08:47:59.370933 master-0 kubenswrapper[4031]: I0318 08:47:59.370167 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370933 master-0 kubenswrapper[4031]: I0318 08:47:59.370171 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket" (OuterVolumeSpecName: "log-socket") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:47:59.370933 master-0 kubenswrapper[4031]: I0318 08:47:59.370550 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.370974 4031 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-node-log\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.370999 4031 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371029 4031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371044 4031 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371058 4031 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 18 
08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371071 4031 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371085 4031 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371254 master-0 kubenswrapper[4031]: I0318 08:47:59.371097 4031 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.371660 master-0 kubenswrapper[4031]: I0318 08:47:59.371627 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:47:59.374318 master-0 kubenswrapper[4031]: I0318 08:47:59.371116 4031 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374478 master-0 kubenswrapper[4031]: I0318 08:47:59.374456 4031 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374615 master-0 kubenswrapper[4031]: I0318 08:47:59.374598 4031 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374704 master-0 kubenswrapper[4031]: I0318 08:47:59.374690 4031 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374782 master-0 kubenswrapper[4031]: I0318 08:47:59.374769 4031 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374857 master-0 kubenswrapper[4031]: I0318 08:47:59.374844 4031 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.374928 master-0 kubenswrapper[4031]: I0318 08:47:59.374915 4031 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.375005 master-0 kubenswrapper[4031]: I0318 08:47:59.374992 4031 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.375093 master-0 kubenswrapper[4031]: I0318 08:47:59.375080 4031 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.375172 master-0 kubenswrapper[4031]: I0318 08:47:59.375158 4031 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.376924 master-0 kubenswrapper[4031]: I0318 08:47:59.376855 4031 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk" (OuterVolumeSpecName: "kube-api-access-rhmbk") pod "f45955c7-5b5e-4172-8ba8-17f6f42ab94f" (UID: "f45955c7-5b5e-4172-8ba8-17f6f42ab94f"). InnerVolumeSpecName "kube-api-access-rhmbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:47:59.378392 master-0 kubenswrapper[4031]: I0318 08:47:59.378311 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-lf7kq" podStartSLOduration=1.548432092 podStartE2EDuration="23.378287216s" podCreationTimestamp="2026-03-18 08:47:36 +0000 UTC" firstStartedPulling="2026-03-18 08:47:37.148596249 +0000 UTC m=+72.078121289" lastFinishedPulling="2026-03-18 08:47:58.978451403 +0000 UTC m=+93.907976413" observedRunningTime="2026-03-18 08:47:59.345064379 +0000 UTC m=+94.274589399" watchObservedRunningTime="2026-03-18 08:47:59.378287216 +0000 UTC m=+94.307812266" Mar 18 08:47:59.475705 master-0 kubenswrapper[4031]: I0318 08:47:59.475632 4031 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:47:59.475705 master-0 kubenswrapper[4031]: I0318 08:47:59.475670 4031 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhmbk\" (UniqueName: \"kubernetes.io/projected/f45955c7-5b5e-4172-8ba8-17f6f42ab94f-kube-api-access-rhmbk\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:00.317336 master-0 kubenswrapper[4031]: I0318 08:48:00.317232 4031 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8bbcbb7729919ddcb0aaf177e6b7da70bdb956a0c249d6fd8ccdc6cd23b74071" exitCode=0 Mar 18 08:48:00.317336 master-0 kubenswrapper[4031]: I0318 08:48:00.317307 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerDied","Data":"8bbcbb7729919ddcb0aaf177e6b7da70bdb956a0c249d6fd8ccdc6cd23b74071"} Mar 18 08:48:00.321688 master-0 kubenswrapper[4031]: I0318 08:48:00.321631 4031 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv" event={"ID":"f45955c7-5b5e-4172-8ba8-17f6f42ab94f","Type":"ContainerDied","Data":"9af47a1fce5f49f05d98ded301fb823e1f5cbb6403282d7c4e47623e10192f4e"} Mar 18 08:48:00.321841 master-0 kubenswrapper[4031]: I0318 08:48:00.321706 4031 scope.go:117] "RemoveContainer" containerID="ebc0e0f0f29deaa42b66d4757db6acb7bcb2013de10c6d0ece78be8a41d14a9a" Mar 18 08:48:00.322116 master-0 kubenswrapper[4031]: I0318 08:48:00.322052 4031 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gcsfv" Mar 18 08:48:00.383314 master-0 kubenswrapper[4031]: I0318 08:48:00.383206 4031 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcsfv"] Mar 18 08:48:00.392765 master-0 kubenswrapper[4031]: I0318 08:48:00.392688 4031 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gcsfv"] Mar 18 08:48:00.401365 master-0 kubenswrapper[4031]: I0318 08:48:00.400007 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6ff5l"] Mar 18 08:48:00.401365 master-0 kubenswrapper[4031]: E0318 08:48:00.400265 4031 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f45955c7-5b5e-4172-8ba8-17f6f42ab94f" containerName="kubecfg-setup" Mar 18 08:48:00.401365 master-0 kubenswrapper[4031]: I0318 08:48:00.400295 4031 state_mem.go:107] "Deleted CPUSet assignment" podUID="f45955c7-5b5e-4172-8ba8-17f6f42ab94f" containerName="kubecfg-setup" Mar 18 08:48:00.401365 master-0 kubenswrapper[4031]: I0318 08:48:00.400383 4031 memory_manager.go:354] "RemoveStaleState removing state" podUID="f45955c7-5b5e-4172-8ba8-17f6f42ab94f" containerName="kubecfg-setup" Mar 18 08:48:00.404274 master-0 kubenswrapper[4031]: I0318 08:48:00.404225 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.408158 master-0 kubenswrapper[4031]: I0318 08:48:00.407985 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 08:48:00.408342 master-0 kubenswrapper[4031]: I0318 08:48:00.408256 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 08:48:00.483864 master-0 kubenswrapper[4031]: I0318 08:48:00.483793 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.483864 master-0 kubenswrapper[4031]: I0318 08:48:00.483863 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.483902 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.483937 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") 
pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.483978 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.484021 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.484053 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484100 master-0 kubenswrapper[4031]: I0318 08:48:00.484084 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484314 master-0 kubenswrapper[4031]: I0318 08:48:00.484116 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97kq\" (UniqueName: 
\"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484314 master-0 kubenswrapper[4031]: I0318 08:48:00.484146 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484314 master-0 kubenswrapper[4031]: I0318 08:48:00.484216 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484314 master-0 kubenswrapper[4031]: I0318 08:48:00.484260 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484314 master-0 kubenswrapper[4031]: I0318 08:48:00.484309 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484496 master-0 kubenswrapper[4031]: I0318 
08:48:00.484341 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484496 master-0 kubenswrapper[4031]: I0318 08:48:00.484372 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484496 master-0 kubenswrapper[4031]: I0318 08:48:00.484402 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484496 master-0 kubenswrapper[4031]: I0318 08:48:00.484456 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484738 master-0 kubenswrapper[4031]: I0318 08:48:00.484515 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" 
Mar 18 08:48:00.484738 master-0 kubenswrapper[4031]: I0318 08:48:00.484559 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.484738 master-0 kubenswrapper[4031]: I0318 08:48:00.484612 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585152 master-0 kubenswrapper[4031]: I0318 08:48:00.585028 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585152 master-0 kubenswrapper[4031]: I0318 08:48:00.585103 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585152 master-0 kubenswrapper[4031]: I0318 08:48:00.585141 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585337 master-0 kubenswrapper[4031]: I0318 08:48:00.585181 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585337 master-0 kubenswrapper[4031]: I0318 08:48:00.585213 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585337 master-0 kubenswrapper[4031]: I0318 08:48:00.585223 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585337 master-0 kubenswrapper[4031]: I0318 08:48:00.585298 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585337 master-0 kubenswrapper[4031]: I0318 08:48:00.585318 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585508 master-0 kubenswrapper[4031]: I0318 08:48:00.585450 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585548 master-0 kubenswrapper[4031]: I0318 08:48:00.585518 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585622 master-0 kubenswrapper[4031]: I0318 08:48:00.585563 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585707 master-0 kubenswrapper[4031]: I0318 08:48:00.585664 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585707 master-0 kubenswrapper[4031]: I0318 08:48:00.585692 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" 
Mar 18 08:48:00.585808 master-0 kubenswrapper[4031]: I0318 08:48:00.585755 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585932 master-0 kubenswrapper[4031]: I0318 08:48:00.585847 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.585932 master-0 kubenswrapper[4031]: I0318 08:48:00.585902 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586011 master-0 kubenswrapper[4031]: I0318 08:48:00.585949 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586117 master-0 kubenswrapper[4031]: I0318 08:48:00.586081 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586179 master-0 kubenswrapper[4031]: I0318 08:48:00.586159 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586221 master-0 kubenswrapper[4031]: I0318 08:48:00.586194 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586265 master-0 kubenswrapper[4031]: I0318 08:48:00.586215 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586475 master-0 kubenswrapper[4031]: I0318 08:48:00.586321 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:00.586475 master-0 kubenswrapper[4031]: I0318 08:48:00.586446 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 
08:48:00.586634 master-0 kubenswrapper[4031]: I0318 08:48:00.586498 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586634 master-0 kubenswrapper[4031]: I0318 08:48:00.586524 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586634 master-0 kubenswrapper[4031]: I0318 08:48:00.586543 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586634 master-0 kubenswrapper[4031]: I0318 08:48:00.586603 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586807 master-0 kubenswrapper[4031]: I0318 08:48:00.586674 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586807 master-0 kubenswrapper[4031]: I0318 08:48:00.586744 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586879 master-0 kubenswrapper[4031]: I0318 08:48:00.586801 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.586879 master-0 kubenswrapper[4031]: I0318 08:48:00.586858 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587036 master-0 kubenswrapper[4031]: I0318 08:48:00.586903 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587036 master-0 kubenswrapper[4031]: I0318 08:48:00.586949 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587128 master-0 kubenswrapper[4031]: I0318 08:48:00.587054 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587196 master-0 kubenswrapper[4031]: I0318 08:48:00.587147 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587196 master-0 kubenswrapper[4031]: I0318 08:48:00.587147 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587329 master-0 kubenswrapper[4031]: I0318 08:48:00.587200 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.587329 master-0 kubenswrapper[4031]: I0318 08:48:00.587235 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.598362 master-0 kubenswrapper[4031]: I0318 08:48:00.598304 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.614607 master-0 kubenswrapper[4031]: I0318 08:48:00.614533 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.724321 master-0 kubenswrapper[4031]: I0318 08:48:00.724222 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:00.742733 master-0 kubenswrapper[4031]: W0318 08:48:00.742674 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dacdedc_c6ad_40d4_afdc_59a31be417fe.slice/crio-111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85 WatchSource:0}: Error finding container 111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85: Status 404 returned error can't find the container with id 111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85
Mar 18 08:48:00.892426 master-0 kubenswrapper[4031]: I0318 08:48:00.892363 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:00.892743 master-0 kubenswrapper[4031]: I0318 08:48:00.892482 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:00.892743 master-0 kubenswrapper[4031]: E0318 08:48:00.892596 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:00.893001 master-0 kubenswrapper[4031]: E0318 08:48:00.892759 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:01.330825 master-0 kubenswrapper[4031]: I0318 08:48:01.330708 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-68tmr" event={"ID":"fdd2f1fd-1a94-4f4e-a275-b075f432f763","Type":"ContainerStarted","Data":"405dd7ab87642e2ed7d21587ea51490bd02bb3a48fa38f6c61e020470a01ce38"}
Mar 18 08:48:01.333903 master-0 kubenswrapper[4031]: I0318 08:48:01.333844 4031 generic.go:334] "Generic (PLEG): container finished" podID="8dacdedc-c6ad-40d4-afdc-59a31be417fe" containerID="ef703157d612ad5a33aedc987f4c2c3909390ffd8d83083c1d4a577646a22e59" exitCode=0
Mar 18 08:48:01.333903 master-0 kubenswrapper[4031]: I0318 08:48:01.333900 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerDied","Data":"ef703157d612ad5a33aedc987f4c2c3909390ffd8d83083c1d4a577646a22e59"}
Mar 18 08:48:01.334098 master-0 kubenswrapper[4031]: I0318 08:48:01.333933 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85"}
Mar 18 08:48:01.357889 master-0 kubenswrapper[4031]: I0318 08:48:01.357784 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-68tmr" podStartSLOduration=5.309214675 podStartE2EDuration="43.357752128s" podCreationTimestamp="2026-03-18 08:47:18 +0000 UTC" firstStartedPulling="2026-03-18 08:47:18.546279479 +0000 UTC m=+53.475804489" lastFinishedPulling="2026-03-18 08:47:56.594816892 +0000 UTC m=+91.524341942" observedRunningTime="2026-03-18 08:48:01.357135454 +0000 UTC m=+96.286660504" watchObservedRunningTime="2026-03-18 08:48:01.357752128 +0000 UTC m=+96.287277198"
Mar 18 08:48:01.899030 master-0 kubenswrapper[4031]: I0318 08:48:01.898646 4031 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f45955c7-5b5e-4172-8ba8-17f6f42ab94f" path="/var/lib/kubelet/pods/f45955c7-5b5e-4172-8ba8-17f6f42ab94f/volumes"
Mar 18 08:48:02.343157 master-0 kubenswrapper[4031]: I0318 08:48:02.343081 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"19f9a244359e41237ffc1eb5935169ff73a476435fe3e06e60645f24c78cc443"}
Mar 18 08:48:02.343882 master-0 kubenswrapper[4031]: I0318 08:48:02.343169 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"90b245c549fc82293eef43fd0897883ad0afd966e8c6ac468d987e8240e7997d"}
Mar 18 08:48:02.343882 master-0 kubenswrapper[4031]: I0318 08:48:02.343198 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"ae75637b10f7c0b9537d10e9d31bc1f41e2b0d05bde118f4c86951cf1ae4c55a"}
Mar 18 08:48:02.343882 master-0 kubenswrapper[4031]: I0318 08:48:02.343222 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"9aa267e729cbb06e17046861a6cd211569a9bd38d25aa7676597b2081aca36a1"}
Mar 18 08:48:02.343882 master-0 kubenswrapper[4031]: I0318 08:48:02.343250 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"a320a5892b65ac6ffa7ea434b24865353f2d4a7b872a5d02823d9ae7e2c86257"}
Mar 18 08:48:02.343882 master-0 kubenswrapper[4031]: I0318 08:48:02.343272 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"c28a4a66266f04a75f823e0d7c00d96a8fa51be07b8c8b39b8be878e985001e0"}
Mar 18 08:48:02.892165 master-0 kubenswrapper[4031]: I0318 08:48:02.892084 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:02.892165 master-0 kubenswrapper[4031]: I0318 08:48:02.892125 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:02.892498 master-0 kubenswrapper[4031]: E0318 08:48:02.892385 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:02.892498 master-0 kubenswrapper[4031]: E0318 08:48:02.892482 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:02.906954 master-0 kubenswrapper[4031]: I0318 08:48:02.906895 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 08:48:04.356155 master-0 kubenswrapper[4031]: I0318 08:48:04.356081 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"11242ff71ba2c7d68149d196973bfedb2fc51a39fa0f93d7fdc521ee4e58975d"}
Mar 18 08:48:04.891889 master-0 kubenswrapper[4031]: I0318 08:48:04.891797 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:04.891889 master-0 kubenswrapper[4031]: I0318 08:48:04.891856 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:04.892124 master-0 kubenswrapper[4031]: E0318 08:48:04.891971 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
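The "Observed pod startup duration" entry above reports both podStartE2EDuration and podStartSLOduration; the SLO figure is the end-to-end startup time minus the time spent pulling images. This can be checked against the logged values themselves (a minimal sketch; the numbers are copied from the multus-additional-cni-plugins-68tmr entry, and subtracting the monotonic m=+ offsets is an assumption about how the tracker computes it):

```python
# Values copied from the multus-additional-cni-plugins-68tmr
# "Observed pod startup duration" entry above.
e2e = 43.357752128            # podStartE2EDuration: watchObservedRunningTime - podCreationTimestamp
first_pull_m = 53.475804489   # firstStartedPulling, monotonic m=+ offset
last_pull_m = 91.524341942    # lastFinishedPulling, monotonic m=+ offset

pull = last_pull_m - first_pull_m  # time spent pulling images (~38.05 s)
slo = e2e - pull                   # startup time excluding image pulls
print(f"{slo:.9f}")                # ~5.309214675, the logged podStartSLOduration
```

For pods that pull nothing (firstStartedPulling/lastFinishedPulling at the zero time, as in later entries), the two durations coincide.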
pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:04.892203 master-0 kubenswrapper[4031]: E0318 08:48:04.892151 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:05.533862 master-0 kubenswrapper[4031]: E0318 08:48:05.533763 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 18 08:48:05.533862 master-0 kubenswrapper[4031]: E0318 08:48:05.533817 4031 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 18 08:48:05.533862 master-0 kubenswrapper[4031]: E0318 08:48:05.533837 4031 projected.go:194] Error preparing data for projected volume kube-api-access-sk4w7 for pod openshift-network-diagnostics/network-check-target-7r2q2: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:05.534819 master-0 kubenswrapper[4031]: E0318 08:48:05.533923 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7 podName:f198f770-5483-4499-abb6-06026f2c6b37 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.533891546 +0000 UTC m=+132.463416596 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-sk4w7" (UniqueName: "kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7") pod "network-check-target-7r2q2" (UID: "f198f770-5483-4499-abb6-06026f2c6b37") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 18 08:48:05.534819 master-0 kubenswrapper[4031]: I0318 08:48:05.533554 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:05.635467 master-0 kubenswrapper[4031]: I0318 08:48:05.635388 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:48:05.635806 master-0 kubenswrapper[4031]: E0318 08:48:05.635636 4031 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:05.635806 master-0 kubenswrapper[4031]: E0318 08:48:05.635769 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:09.635732176 +0000 UTC m=+164.565257226 (durationBeforeRetry 1m4s).
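The durationBeforeRetry values in the nestedpendingoperations errors above (32s, then 1m4s) are consecutive steps of the volume manager's exponential backoff. Assuming the usual shape of such a backoff (initial delay around 500ms, doubling each failure, capped at roughly two minutes; these constants are an assumption, not taken from this log), the delay sequence can be sketched as:

```python
def backoff_delays(initial=0.5, factor=2.0, cap=122.2, steps=10):
    """Successive retry delays in seconds: double on each failure, capped.
    Constants are illustrative assumptions, not read from the log."""
    delay, out = initial, []
    for _ in range(steps):
        out.append(min(delay, cap))
        delay *= factor
    return out

delays = backoff_delays()
# 32s and 64s (logged as 1m4s) appear as consecutive steps of the doubling
print(delays)
```

The doubling explains why later retries of a still-missing secret are spaced further and further apart until the cap is reached.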
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:05.912775 master-0 kubenswrapper[4031]: I0318 08:48:05.912492 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=3.912465719 podStartE2EDuration="3.912465719s" podCreationTimestamp="2026-03-18 08:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:05.911734213 +0000 UTC m=+100.841259263" watchObservedRunningTime="2026-03-18 08:48:05.912465719 +0000 UTC m=+100.841990769"
Mar 18 08:48:06.892423 master-0 kubenswrapper[4031]: I0318 08:48:06.891977 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:06.893331 master-0 kubenswrapper[4031]: I0318 08:48:06.892021 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:06.893331 master-0 kubenswrapper[4031]: E0318 08:48:06.892460 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:06.893331 master-0 kubenswrapper[4031]: E0318 08:48:06.892762 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:07.373427 master-0 kubenswrapper[4031]: I0318 08:48:07.373321 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" event={"ID":"8dacdedc-c6ad-40d4-afdc-59a31be417fe","Type":"ContainerStarted","Data":"0867a9e7e5a32259bce8036fd0fed7273bab3b474ef4528dcb11709459343d85"}
Mar 18 08:48:07.396471 master-0 kubenswrapper[4031]: I0318 08:48:07.396317 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" podStartSLOduration=7.396284253 podStartE2EDuration="7.396284253s" podCreationTimestamp="2026-03-18 08:48:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:07.395097926 +0000 UTC m=+102.324623006" watchObservedRunningTime="2026-03-18 08:48:07.396284253 +0000 UTC m=+102.325809323"
Mar 18 08:48:08.377067 master-0 kubenswrapper[4031]: I0318 08:48:08.376966 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:08.377067 master-0 kubenswrapper[4031]: I0318 08:48:08.377056 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:08.377067 master-0 kubenswrapper[4031]: I0318 08:48:08.377082 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:08.407990 master-0 kubenswrapper[4031]: I0318 08:48:08.407930 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:08.410055 master-0 kubenswrapper[4031]: I0318 08:48:08.409991 4031 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:08.867204 master-0 kubenswrapper[4031]: I0318 08:48:08.866214 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xs9n"]
Mar 18 08:48:08.867204 master-0 kubenswrapper[4031]: I0318 08:48:08.866845 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:08.867595 master-0 kubenswrapper[4031]: E0318 08:48:08.867263 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:08.870343 master-0 kubenswrapper[4031]: I0318 08:48:08.870276 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7r2q2"]
Mar 18 08:48:08.870516 master-0 kubenswrapper[4031]: I0318 08:48:08.870417 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:08.870729 master-0 kubenswrapper[4031]: E0318 08:48:08.870559 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:10.891925 master-0 kubenswrapper[4031]: I0318 08:48:10.891813 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:10.891925 master-0 kubenswrapper[4031]: I0318 08:48:10.891871 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:10.892884 master-0 kubenswrapper[4031]: E0318 08:48:10.891993 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:10.892884 master-0 kubenswrapper[4031]: E0318 08:48:10.892217 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:13.020518 master-0 kubenswrapper[4031]: I0318 08:48:13.020431 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:13.021338 master-0 kubenswrapper[4031]: I0318 08:48:13.020480 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:13.021338 master-0 kubenswrapper[4031]: E0318 08:48:13.020850 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:13.021338 master-0 kubenswrapper[4031]: E0318 08:48:13.020976 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:14.892530 master-0 kubenswrapper[4031]: I0318 08:48:14.892464 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:14.892530 master-0 kubenswrapper[4031]: I0318 08:48:14.892500 4031 util.go:30] "No sandbox for pod can be found.
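The repeating "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors persist until the network provider (here, the OVN-Kubernetes pods starting above) drops a CNI config into that directory, at which point the runtime reports NetworkReady=true. The condition kubelet is waiting on can be approximated by a small check (a hypothetical helper, not part of kubelet; only the directory path is taken from the error message):

```python
import glob
import os

def cni_config_present(conf_dir="/etc/kubernetes/cni/net.d"):
    """Return True once at least one CNI config file exists in conf_dir.
    Roughly the condition behind kubelet's NetworkPluginNotReady message;
    the matched extensions are an assumption about common CNI config names."""
    patterns = ("*.conf", "*.conflist", "*.json")
    return any(glob.glob(os.path.join(conf_dir, p)) for p in patterns)
```

On this node the errors stop recurring shortly after ovnkube-node-6ff5l becomes ready, consistent with the config appearing around then.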
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:14.893286 master-0 kubenswrapper[4031]: E0318 08:48:14.892732 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:48:14.893286 master-0 kubenswrapper[4031]: E0318 08:48:14.892865 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-7r2q2" podUID="f198f770-5483-4499-abb6-06026f2c6b37"
Mar 18 08:48:15.346924 master-0 kubenswrapper[4031]: I0318 08:48:15.346826 4031 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 18 08:48:15.347188 master-0 kubenswrapper[4031]: I0318 08:48:15.347020 4031 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Mar 18 08:48:15.387825 master-0 kubenswrapper[4031]: I0318 08:48:15.387133 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"]
Mar 18 08:48:15.387825 master-0 kubenswrapper[4031]: I0318 08:48:15.387473 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.390717 master-0 kubenswrapper[4031]: I0318 08:48:15.390168 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 18 08:48:15.390717 master-0 kubenswrapper[4031]: I0318 08:48:15.390397 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 08:48:15.390991 master-0 kubenswrapper[4031]: I0318 08:48:15.390746 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 08:48:15.392611 master-0 kubenswrapper[4031]: I0318 08:48:15.391632 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 08:48:15.408772 master-0 kubenswrapper[4031]: I0318 08:48:15.408714 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"]
Mar 18 08:48:15.409119 master-0 kubenswrapper[4031]: I0318 08:48:15.409091 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-m862c"]
Mar 18 08:48:15.409380 master-0 kubenswrapper[4031]: I0318 08:48:15.409333 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.409439 master-0 kubenswrapper[4031]: I0318 08:48:15.409403 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.410579 master-0 kubenswrapper[4031]: I0318 08:48:15.410545 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"]
Mar 18 08:48:15.411210 master-0 kubenswrapper[4031]: I0318 08:48:15.411193 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.412867 master-0 kubenswrapper[4031]: I0318 08:48:15.412813 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"]
Mar 18 08:48:15.413633 master-0 kubenswrapper[4031]: I0318 08:48:15.413557 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.414226 master-0 kubenswrapper[4031]: I0318 08:48:15.414192 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 08:48:15.414396 master-0 kubenswrapper[4031]: I0318 08:48:15.414380 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 08:48:15.414518 master-0 kubenswrapper[4031]: I0318 08:48:15.414383 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"]
Mar 18 08:48:15.423991 master-0 kubenswrapper[4031]: I0318 08:48:15.423811 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.424190 master-0 kubenswrapper[4031]: I0318 08:48:15.424159 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"]
Mar 18 08:48:15.424781 master-0 kubenswrapper[4031]: I0318 08:48:15.424743 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"
Mar 18 08:48:15.426733 master-0 kubenswrapper[4031]: I0318 08:48:15.426702 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"]
Mar 18 08:48:15.427514 master-0 kubenswrapper[4031]: I0318 08:48:15.427494 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:15.430495 master-0 kubenswrapper[4031]: I0318 08:48:15.429049 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 08:48:15.430495 master-0 kubenswrapper[4031]: I0318 08:48:15.429240 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 08:48:15.430495 master-0 kubenswrapper[4031]: I0318 08:48:15.429241 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.438631 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.440975 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"]
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441388 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"]
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441423 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441658 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441703 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441795 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441837 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.441947 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.442101 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 08:48:15.442314 master-0 kubenswrapper[4031]: I0318 08:48:15.442189 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.442871 master-0 kubenswrapper[4031]: I0318 08:48:15.442453 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"]
Mar 18 08:48:15.442871 master-0 kubenswrapper[4031]: I0318 08:48:15.442864 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.446115 master-0 kubenswrapper[4031]: I0318 08:48:15.446089 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 08:48:15.446930 master-0 kubenswrapper[4031]: I0318 08:48:15.446890 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 08:48:15.447146 master-0 kubenswrapper[4031]: I0318 08:48:15.447118 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"]
Mar 18 08:48:15.447583 master-0 kubenswrapper[4031]: I0318 08:48:15.447547 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:15.451168 master-0 kubenswrapper[4031]: I0318 08:48:15.449687 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-2649q"]
Mar 18 08:48:15.451168 master-0 kubenswrapper[4031]: I0318 08:48:15.450348 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:15.451168 master-0 kubenswrapper[4031]: I0318 08:48:15.450494 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"]
Mar 18 08:48:15.452560 master-0 kubenswrapper[4031]: I0318 08:48:15.452525 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"]
Mar 18 08:48:15.452893 master-0 kubenswrapper[4031]: I0318 08:48:15.452871 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.453047 master-0 kubenswrapper[4031]: I0318 08:48:15.453010 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"]
Mar 18 08:48:15.453516 master-0 kubenswrapper[4031]: I0318 08:48:15.453493 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.453654 master-0 kubenswrapper[4031]: I0318 08:48:15.453626 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.456407 master-0 kubenswrapper[4031]: I0318 08:48:15.455742 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"]
Mar 18 08:48:15.456407 master-0 kubenswrapper[4031]: I0318 08:48:15.456237 4031 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:15.456407 master-0 kubenswrapper[4031]: I0318 08:48:15.456352 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 08:48:15.457277 master-0 kubenswrapper[4031]: I0318 08:48:15.456722 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"] Mar 18 08:48:15.457277 master-0 kubenswrapper[4031]: I0318 08:48:15.457142 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:15.458369 master-0 kubenswrapper[4031]: I0318 08:48:15.457902 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"] Mar 18 08:48:15.463634 master-0 kubenswrapper[4031]: I0318 08:48:15.459446 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 08:48:15.463634 master-0 kubenswrapper[4031]: I0318 08:48:15.463026 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.463848 master-0 kubenswrapper[4031]: I0318 08:48:15.463647 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 08:48:15.463890 master-0 kubenswrapper[4031]: I0318 08:48:15.463877 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 08:48:15.464105 master-0 kubenswrapper[4031]: I0318 08:48:15.464076 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 08:48:15.464279 master-0 kubenswrapper[4031]: I0318 08:48:15.464252 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 08:48:15.464451 master-0 kubenswrapper[4031]: I0318 08:48:15.464425 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 08:48:15.464665 master-0 kubenswrapper[4031]: I0318 08:48:15.464637 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.464849 master-0 kubenswrapper[4031]: I0318 08:48:15.464821 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.475231 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 
08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.475814 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.475908 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.476438 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.476507 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.476857 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"] Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.477138 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.477405 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.477603 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.477653 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478013 4031 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478175 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478242 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478517 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478603 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478643 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.478875 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.479017 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.479234 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.479497 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.482140 master-0 
kubenswrapper[4031]: I0318 08:48:15.480429 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 08:48:15.482140 master-0 kubenswrapper[4031]: I0318 08:48:15.481085 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 08:48:15.483099 master-0 kubenswrapper[4031]: I0318 08:48:15.482141 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 08:48:15.488185 master-0 kubenswrapper[4031]: I0318 08:48:15.484558 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"] Mar 18 08:48:15.509328 master-0 kubenswrapper[4031]: I0318 08:48:15.509281 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:15.509856 master-0 kubenswrapper[4031]: I0318 08:48:15.509817 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 08:48:15.511095 master-0 kubenswrapper[4031]: I0318 08:48:15.511061 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.512340 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"] Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.513090 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.513463 4031 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"] Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.513816 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.514122 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-m862c"] Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.514163 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:15.515601 master-0 kubenswrapper[4031]: I0318 08:48:15.515100 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"] Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.519576 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.519910 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.520246 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.520457 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.520615 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 
08:48:15.521076 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.523557 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"] Mar 18 08:48:15.523618 master-0 kubenswrapper[4031]: I0318 08:48:15.523613 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"] Mar 18 08:48:15.525334 master-0 kubenswrapper[4031]: I0318 08:48:15.525293 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"] Mar 18 08:48:15.525416 master-0 kubenswrapper[4031]: I0318 08:48:15.525354 4031 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-vr4gq"] Mar 18 08:48:15.525557 master-0 kubenswrapper[4031]: I0318 08:48:15.525531 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 08:48:15.525774 master-0 kubenswrapper[4031]: I0318 08:48:15.525753 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 08:48:15.525903 master-0 kubenswrapper[4031]: I0318 08:48:15.525868 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:15.525972 master-0 kubenswrapper[4031]: I0318 08:48:15.525952 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 08:48:15.526016 master-0 kubenswrapper[4031]: I0318 08:48:15.525975 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526159 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526245 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526167 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526393 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526413 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526395 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526436 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 08:48:15.526495 master-0 kubenswrapper[4031]: I0318 08:48:15.526448 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 08:48:15.527077 master-0 kubenswrapper[4031]: I0318 08:48:15.526867 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 08:48:15.527077 master-0 kubenswrapper[4031]: I0318 08:48:15.526976 4031 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 08:48:15.527077 master-0 kubenswrapper[4031]: I0318 08:48:15.527055 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"] Mar 18 08:48:15.527186 master-0 kubenswrapper[4031]: I0318 08:48:15.527106 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 08:48:15.527186 master-0 kubenswrapper[4031]: I0318 08:48:15.527117 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 08:48:15.527186 master-0 kubenswrapper[4031]: I0318 08:48:15.527178 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 08:48:15.527288 master-0 kubenswrapper[4031]: I0318 08:48:15.527212 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 08:48:15.527602 master-0 kubenswrapper[4031]: I0318 08:48:15.527551 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"] Mar 18 08:48:15.529938 master-0 kubenswrapper[4031]: I0318 08:48:15.529324 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"] Mar 18 08:48:15.529938 master-0 kubenswrapper[4031]: I0318 08:48:15.529360 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"] Mar 18 08:48:15.529938 master-0 kubenswrapper[4031]: I0318 08:48:15.529370 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"] 
Mar 18 08:48:15.530235 master-0 kubenswrapper[4031]: I0318 08:48:15.530141 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 08:48:15.532093 master-0 kubenswrapper[4031]: I0318 08:48:15.530771 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"] Mar 18 08:48:15.532093 master-0 kubenswrapper[4031]: I0318 08:48:15.531437 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"] Mar 18 08:48:15.532093 master-0 kubenswrapper[4031]: I0318 08:48:15.531892 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"] Mar 18 08:48:15.533147 master-0 kubenswrapper[4031]: I0318 08:48:15.532681 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"] Mar 18 08:48:15.533731 master-0 kubenswrapper[4031]: I0318 08:48:15.533243 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-2649q"] Mar 18 08:48:15.533915 master-0 kubenswrapper[4031]: I0318 08:48:15.533883 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"] Mar 18 08:48:15.534626 master-0 kubenswrapper[4031]: I0318 08:48:15.534598 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"] Mar 18 08:48:15.535206 master-0 kubenswrapper[4031]: I0318 08:48:15.535180 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"] Mar 18 08:48:15.539832 master-0 kubenswrapper[4031]: I0318 08:48:15.539802 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.539900 master-0 kubenswrapper[4031]: I0318 08:48:15.539844 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.539900 master-0 kubenswrapper[4031]: I0318 08:48:15.539867 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.539900 master-0 kubenswrapper[4031]: I0318 08:48:15.539884 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:15.539987 master-0 kubenswrapper[4031]: I0318 08:48:15.539908 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod 
\"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.539987 master-0 kubenswrapper[4031]: I0318 08:48:15.539932 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:15.539987 master-0 kubenswrapper[4031]: I0318 08:48:15.539953 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 08:48:15.539987 master-0 kubenswrapper[4031]: I0318 08:48:15.539970 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.539993 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: 
\"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540014 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540034 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540068 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540088 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:15.540255 master-0 
kubenswrapper[4031]: I0318 08:48:15.540109 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540127 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.540255 master-0 kubenswrapper[4031]: I0318 08:48:15.540238 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"]
Mar 18 08:48:15.540485 master-0 kubenswrapper[4031]: I0318 08:48:15.540250 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.540485 master-0 kubenswrapper[4031]: I0318 08:48:15.540323 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.540485 master-0 kubenswrapper[4031]: I0318 08:48:15.540356 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:15.540485 master-0 kubenswrapper[4031]: I0318 08:48:15.540420 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.540485 master-0 kubenswrapper[4031]: I0318 08:48:15.540447 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:15.540663 master-0 kubenswrapper[4031]: I0318 08:48:15.540483 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.540663 master-0 kubenswrapper[4031]: I0318 08:48:15.540536 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.540663 master-0 kubenswrapper[4031]: I0318 08:48:15.540558 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.540663 master-0 kubenswrapper[4031]: I0318 08:48:15.540597 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.540663 master-0 kubenswrapper[4031]: I0318 08:48:15.540642 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:15.540814 master-0 kubenswrapper[4031]: I0318 08:48:15.540664 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.540814 master-0 kubenswrapper[4031]: I0318 08:48:15.540731 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.540814 master-0 kubenswrapper[4031]: I0318 08:48:15.540756 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.540814 master-0 kubenswrapper[4031]: I0318 08:48:15.540775 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540831 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540854 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540895 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540918 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540942 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:15.540991 master-0 kubenswrapper[4031]: I0318 08:48:15.540979 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541000 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541020 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541043 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541077 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541102 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.541164 master-0 kubenswrapper[4031]: I0318 08:48:15.541153 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:15.541320 master-0 kubenswrapper[4031]: I0318 08:48:15.541178 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.541320 master-0 kubenswrapper[4031]: I0318 08:48:15.541196 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.541320 master-0 kubenswrapper[4031]: I0318 08:48:15.541220 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.541320 master-0 kubenswrapper[4031]: I0318 08:48:15.541289 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.541320 master-0 kubenswrapper[4031]: I0318 08:48:15.541319 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:15.541448 master-0 kubenswrapper[4031]: I0318 08:48:15.541339 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.541448 master-0 kubenswrapper[4031]: I0318 08:48:15.541364 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.541448 master-0 kubenswrapper[4031]: I0318 08:48:15.541381 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.541448 master-0 kubenswrapper[4031]: I0318 08:48:15.541407 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.541448 master-0 kubenswrapper[4031]: I0318 08:48:15.541432 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.541596 master-0 kubenswrapper[4031]: I0318 08:48:15.541449 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.541596 master-0 kubenswrapper[4031]: I0318 08:48:15.541467 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.541596 master-0 kubenswrapper[4031]: I0318 08:48:15.541532 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"
Mar 18 08:48:15.541596 master-0 kubenswrapper[4031]: I0318 08:48:15.541556 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.541699 master-0 kubenswrapper[4031]: I0318 08:48:15.541617 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.541699 master-0 kubenswrapper[4031]: I0318 08:48:15.541640 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:15.541699 master-0 kubenswrapper[4031]: I0318 08:48:15.541679 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.541816 master-0 kubenswrapper[4031]: I0318 08:48:15.541798 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"]
Mar 18 08:48:15.543489 master-0 kubenswrapper[4031]: I0318 08:48:15.543459 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"]
Mar 18 08:48:15.642106 master-0 kubenswrapper[4031]: I0318 08:48:15.642078 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.642302 master-0 kubenswrapper[4031]: I0318 08:48:15.642286 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:15.642375 master-0 kubenswrapper[4031]: I0318 08:48:15.642362 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.642452 master-0 kubenswrapper[4031]: I0318 08:48:15.642441 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.642511 master-0 kubenswrapper[4031]: I0318 08:48:15.642500 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.642608 master-0 kubenswrapper[4031]: I0318 08:48:15.642595 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.642672 master-0 kubenswrapper[4031]: I0318 08:48:15.642661 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.642747 master-0 kubenswrapper[4031]: I0318 08:48:15.642735 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.642812 master-0 kubenswrapper[4031]: I0318 08:48:15.642800 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:15.642876 master-0 kubenswrapper[4031]: I0318 08:48:15.642864 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.642943 master-0 kubenswrapper[4031]: I0318 08:48:15.642931 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.643007 master-0 kubenswrapper[4031]: I0318 08:48:15.642996 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:15.643068 master-0 kubenswrapper[4031]: I0318 08:48:15.643058 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.643127 master-0 kubenswrapper[4031]: I0318 08:48:15.643116 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.643193 master-0 kubenswrapper[4031]: I0318 08:48:15.643180 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.643256 master-0 kubenswrapper[4031]: I0318 08:48:15.643244 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.643318 master-0 kubenswrapper[4031]: I0318 08:48:15.643306 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.643383 master-0 kubenswrapper[4031]: I0318 08:48:15.643372 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:15.643429 master-0 kubenswrapper[4031]: E0318 08:48:15.642323 4031 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:15.643516 master-0 kubenswrapper[4031]: E0318 08:48:15.643507 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.143491433 +0000 UTC m=+111.073016443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found
Mar 18 08:48:15.643833 master-0 kubenswrapper[4031]: E0318 08:48:15.642687 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 08:48:15.643929 master-0 kubenswrapper[4031]: E0318 08:48:15.643919 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.143908112 +0000 UTC m=+111.073433112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found
Mar 18 08:48:15.643991 master-0 kubenswrapper[4031]: I0318 08:48:15.643970 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.644074 master-0 kubenswrapper[4031]: I0318 08:48:15.644045 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.644152 master-0 kubenswrapper[4031]: E0318 08:48:15.644112 4031 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:15.644189 master-0 kubenswrapper[4031]: E0318 08:48:15.644178 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.144164948 +0000 UTC m=+111.073689958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found
Mar 18 08:48:15.644231 master-0 kubenswrapper[4031]: I0318 08:48:15.644213 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.644265 master-0 kubenswrapper[4031]: I0318 08:48:15.644239 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.644265 master-0 kubenswrapper[4031]: I0318 08:48:15.644256 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.644322 master-0 kubenswrapper[4031]: I0318 08:48:15.644296 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.644322 master-0 kubenswrapper[4031]: I0318 08:48:15.644315 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.644374 master-0 kubenswrapper[4031]: I0318 08:48:15.644359 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.644400 master-0 kubenswrapper[4031]: I0318 08:48:15.644380 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.644400 master-0 kubenswrapper[4031]: I0318 08:48:15.644397 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.644585 master-0 kubenswrapper[4031]: I0318 08:48:15.644554 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.644649 master-0 kubenswrapper[4031]: I0318 08:48:15.644603 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.644737 master-0 kubenswrapper[4031]: I0318 08:48:15.644705 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.645012 master-0 kubenswrapper[4031]: E0318 08:48:15.644959 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:15.645061 master-0 kubenswrapper[4031]: I0318 08:48:15.645012 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.645128 master-0 kubenswrapper[4031]: E0318 08:48:15.645104 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.145063178 +0000 UTC m=+111.074588228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:15.645270 master-0 kubenswrapper[4031]: I0318 08:48:15.645236 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.645363 master-0 kubenswrapper[4031]: E0318 08:48:15.645339 4031 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:15.645527 master-0 kubenswrapper[4031]: E0318 08:48:15.645501 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.145481087 +0000 UTC m=+111.075006107 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:15.645696 master-0 kubenswrapper[4031]: I0318 08:48:15.645634 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:15.646148 master-0 kubenswrapper[4031]: I0318 08:48:15.646112 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:15.646196 master-0 kubenswrapper[4031]: I0318 08:48:15.646171 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.646305 master-0 kubenswrapper[4031]: I0318 08:48:15.646291 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod 
\"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.646371 master-0 kubenswrapper[4031]: I0318 08:48:15.646295 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 08:48:15.646445 master-0 kubenswrapper[4031]: I0318 08:48:15.646433 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.646520 master-0 kubenswrapper[4031]: I0318 08:48:15.646507 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 08:48:15.646632 master-0 kubenswrapper[4031]: I0318 08:48:15.646621 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:15.646713 master-0 kubenswrapper[4031]: I0318 08:48:15.646701 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:15.646781 master-0 kubenswrapper[4031]: I0318 08:48:15.646770 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 08:48:15.646842 master-0 kubenswrapper[4031]: I0318 08:48:15.646832 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:15.646920 master-0 kubenswrapper[4031]: I0318 08:48:15.646908 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:15.646991 master-0 kubenswrapper[4031]: I0318 08:48:15.646979 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649782 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649837 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649868 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649903 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649947 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.649979 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650005 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650031 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:15.656030 master-0 
kubenswrapper[4031]: I0318 08:48:15.650056 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650077 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: E0318 08:48:15.650086 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650100 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650130 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" Mar 18 
08:48:15.656030 master-0 kubenswrapper[4031]: E0318 08:48:15.650181 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.150131752 +0000 UTC m=+111.079656862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:15.656030 master-0 kubenswrapper[4031]: I0318 08:48:15.650213 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:15.656501 master-0 kubenswrapper[4031]: I0318 08:48:15.650303 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:15.656501 master-0 kubenswrapper[4031]: I0318 08:48:15.650371 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 
08:48:15.656501 master-0 kubenswrapper[4031]: I0318 08:48:15.650401 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:15.656501 master-0 kubenswrapper[4031]: I0318 08:48:15.650855 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:15.656501 master-0 kubenswrapper[4031]: E0318 08:48:15.653147 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:15.656501 master-0 kubenswrapper[4031]: E0318 08:48:15.653196 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.153180201 +0000 UTC m=+111.082705211 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:15.668740 master-0 kubenswrapper[4031]: I0318 08:48:15.668708 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 08:48:15.668839 master-0 kubenswrapper[4031]: I0318 08:48:15.668759 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:15.668839 master-0 kubenswrapper[4031]: I0318 08:48:15.668788 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:15.668839 master-0 kubenswrapper[4031]: I0318 08:48:15.668817 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod 
\"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:15.668839 master-0 kubenswrapper[4031]: I0318 08:48:15.668840 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668863 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668891 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668917 4031 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668945 4031 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668970 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.668993 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.669018 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.669043 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") 
pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: I0318 08:48:15.669067 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.669200 4031 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.669252 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.169230912 +0000 UTC m=+111.098755932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.669605 4031 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.669695 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.169670801 +0000 UTC m=+111.099195831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.670053 4031 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:15.670346 master-0 kubenswrapper[4031]: E0318 08:48:15.670086 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.170076101 +0000 UTC m=+111.099601121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:15.671094 master-0 kubenswrapper[4031]: I0318 08:48:15.670443 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 08:48:15.671094 master-0 kubenswrapper[4031]: I0318 08:48:15.670461 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:15.671094 master-0 kubenswrapper[4031]: E0318 08:48:15.670544 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:15.671094 master-0 kubenswrapper[4031]: E0318 08:48:15.670595 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:16.170584172 +0000 UTC m=+111.100109202 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:15.671094 master-0 kubenswrapper[4031]: I0318 08:48:15.670654 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:15.671590 master-0 kubenswrapper[4031]: I0318 08:48:15.671539 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.673138 master-0 kubenswrapper[4031]: I0318 08:48:15.673078 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.673239 master-0 kubenswrapper[4031]: I0318 08:48:15.673198 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.673275 master-0 kubenswrapper[4031]: I0318 08:48:15.673255 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.673696 master-0 kubenswrapper[4031]: I0318 08:48:15.673673 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.675059 master-0 kubenswrapper[4031]: I0318 08:48:15.674942 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.679086 master-0 kubenswrapper[4031]: I0318 08:48:15.678742 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:15.679232 master-0 kubenswrapper[4031]: I0318 08:48:15.679198 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.679299 master-0 kubenswrapper[4031]: I0318 08:48:15.679279 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.679853 master-0 kubenswrapper[4031]: I0318 08:48:15.679836 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.680960 master-0 kubenswrapper[4031]: I0318 08:48:15.680927 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.681296 master-0 kubenswrapper[4031]: I0318 08:48:15.681265 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.681332 master-0 kubenswrapper[4031]: I0318 08:48:15.681268 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:15.681968 master-0 kubenswrapper[4031]: I0318 08:48:15.681934 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.682399 master-0 kubenswrapper[4031]: I0318 08:48:15.682366 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.682855 master-0 kubenswrapper[4031]: I0318 08:48:15.682826 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.684720 master-0 kubenswrapper[4031]: I0318 08:48:15.684677 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.685218 master-0 kubenswrapper[4031]: I0318 08:48:15.685201 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.685320 master-0 kubenswrapper[4031]: I0318 08:48:15.685286 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.685581 master-0 kubenswrapper[4031]: I0318 08:48:15.685548 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:15.686029 master-0 kubenswrapper[4031]: I0318 08:48:15.686002 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.686934 master-0 kubenswrapper[4031]: I0318 08:48:15.686897 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.688304 master-0 kubenswrapper[4031]: I0318 08:48:15.688279 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"
Mar 18 08:48:15.689023 master-0 kubenswrapper[4031]: I0318 08:48:15.688979 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.690653 master-0 kubenswrapper[4031]: I0318 08:48:15.690622 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:15.691184 master-0 kubenswrapper[4031]: I0318 08:48:15.691148 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.691243 master-0 kubenswrapper[4031]: I0318 08:48:15.691204 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:15.692252 master-0 kubenswrapper[4031]: I0318 08:48:15.692233 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.694677 master-0 kubenswrapper[4031]: I0318 08:48:15.694647 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.705442 master-0 kubenswrapper[4031]: I0318 08:48:15.705396 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:15.708937 master-0 kubenswrapper[4031]: I0318 08:48:15.708906 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:15.723467 master-0 kubenswrapper[4031]: I0318 08:48:15.723435 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:15.729130 master-0 kubenswrapper[4031]: I0318 08:48:15.728487 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:15.743325 master-0 kubenswrapper[4031]: I0318 08:48:15.743176 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:15.744752 master-0 kubenswrapper[4031]: I0318 08:48:15.744186 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:15.765418 master-0 kubenswrapper[4031]: I0318 08:48:15.765131 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:15.769866 master-0 kubenswrapper[4031]: I0318 08:48:15.769831 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770068 master-0 kubenswrapper[4031]: I0318 08:48:15.770034 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.770133 master-0 kubenswrapper[4031]: I0318 08:48:15.770076 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770133 master-0 kubenswrapper[4031]: I0318 08:48:15.770098 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.770209 master-0 kubenswrapper[4031]: I0318 08:48:15.770198 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.770245 master-0 kubenswrapper[4031]: I0318 08:48:15.770229 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770402 master-0 kubenswrapper[4031]: I0318 08:48:15.770379 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.770468 master-0 kubenswrapper[4031]: I0318 08:48:15.770403 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.770468 master-0 kubenswrapper[4031]: I0318 08:48:15.770433 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770468 master-0 kubenswrapper[4031]: I0318 08:48:15.770459 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.770559 master-0 kubenswrapper[4031]: I0318 08:48:15.770512 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770620 master-0 kubenswrapper[4031]: I0318 08:48:15.770581 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.770700 master-0 kubenswrapper[4031]: I0318 08:48:15.770679 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.772072 master-0 kubenswrapper[4031]: I0318 08:48:15.772047 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.772155 master-0 kubenswrapper[4031]: I0318 08:48:15.772076 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.772381 master-0 kubenswrapper[4031]: I0318 08:48:15.772360 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.773205 master-0 kubenswrapper[4031]: I0318 08:48:15.773179 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.773624 master-0 kubenswrapper[4031]: I0318 08:48:15.773603 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.775949 master-0 kubenswrapper[4031]: I0318 08:48:15.775911 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.775949 master-0 kubenswrapper[4031]: I0318 08:48:15.775938 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.776192 master-0 kubenswrapper[4031]: I0318 08:48:15.776151 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.785023 master-0 kubenswrapper[4031]: I0318 08:48:15.784980 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:15.810529 master-0 kubenswrapper[4031]: I0318 08:48:15.810473 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:15.839584 master-0 kubenswrapper[4031]: I0318 08:48:15.837332 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:15.854093 master-0 kubenswrapper[4031]: I0318 08:48:15.842871 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:15.865672 master-0 kubenswrapper[4031]: I0318 08:48:15.865265 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:15.868360 master-0 kubenswrapper[4031]: I0318 08:48:15.868314 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:15.871913 master-0 kubenswrapper[4031]: I0318 08:48:15.871884 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"
Mar 18 08:48:15.889364 master-0 kubenswrapper[4031]: I0318 08:48:15.889313 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:15.895344 master-0 kubenswrapper[4031]: I0318 08:48:15.893854 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:15.912783 master-0 kubenswrapper[4031]: I0318 08:48:15.907817 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:15.912783 master-0 kubenswrapper[4031]: I0318 08:48:15.908676 4031 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:15.926203 master-0 kubenswrapper[4031]: I0318 08:48:15.925807 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:15.931028 master-0 kubenswrapper[4031]: I0318 08:48:15.928879 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:15.939122 master-0 kubenswrapper[4031]: I0318 08:48:15.939043 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"]
Mar 18 08:48:15.940171 master-0 kubenswrapper[4031]: I0318 08:48:15.940137 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"]
Mar 18 08:48:15.966781 master-0 kubenswrapper[4031]: W0318 08:48:15.966727 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb40ee8d1_83f1_4d5e_8a24_2c2dbd7edbdd.slice/crio-57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15 WatchSource:0}: Error finding container 57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15: Status 404 returned error can't find the container with id 57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15
Mar 18 08:48:16.032766 master-0 kubenswrapper[4031]: I0318 08:48:16.032734 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"]
Mar 18 08:48:16.033576 master-0 kubenswrapper[4031]: I0318 08:48:16.033535 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:16.051742 master-0 kubenswrapper[4031]: I0318 08:48:16.051700 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:16.055189 master-0 kubenswrapper[4031]: W0318 08:48:16.055148 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f827195_f68d_4bd2_865b_a1f041a5c73e.slice/crio-8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947 WatchSource:0}: Error finding container 8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947: Status 404 returned error can't find the container with id 8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947
Mar 18 08:48:16.057122 master-0 kubenswrapper[4031]: I0318 08:48:16.056995 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:16.071261 master-0 kubenswrapper[4031]: I0318 08:48:16.069700 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:16.088159 master-0 kubenswrapper[4031]: I0318 08:48:16.087929 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"]
Mar 18 08:48:16.108173 master-0 kubenswrapper[4031]: I0318 08:48:16.108079 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"]
Mar 18 08:48:16.143792 master-0 kubenswrapper[4031]: I0318 08:48:16.140011 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"]
Mar 18 08:48:16.174230 master-0 kubenswrapper[4031]: I0318 08:48:16.174190 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"]
Mar 18 08:48:16.176172 master-0 kubenswrapper[4031]: I0318 08:48:16.176128 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:16.176242 master-0 kubenswrapper[4031]: I0318 08:48:16.176180 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:16.176242 master-0 kubenswrapper[4031]: I0318 08:48:16.176222 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:16.176340 master-0 kubenswrapper[4031]: I0318 08:48:16.176252 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:16.176340 master-0 kubenswrapper[4031]: I0318 08:48:16.176279 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:16.176340 master-0 kubenswrapper[4031]: I0318 08:48:16.176304 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:16.177275 master-0 kubenswrapper[4031]: I0318 08:48:16.177241 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:16.177347 master-0 kubenswrapper[4031]: I0318 08:48:16.177318 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:16.177386 master-0 kubenswrapper[4031]: I0318 08:48:16.177347 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:16.177423 master-0 kubenswrapper[4031]: I0318 08:48:16.177401 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:16.177463 master-0 kubenswrapper[4031]: I0318 08:48:16.177437 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:16.177603 master-0 kubenswrapper[4031]: E0318 08:48:16.177557 4031 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:16.177684 master-0 kubenswrapper[4031]: E0318 08:48:16.177651 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.177632526 +0000 UTC m=+112.107157536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found
Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178077 4031 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178113 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178104237 +0000 UTC m=+112.107629247 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178159 4031 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178182 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178175589 +0000 UTC m=+112.107700599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178223 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178241 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.17823562 +0000 UTC m=+112.107760630 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178280 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178303 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178296301 +0000 UTC m=+112.107821311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178351 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178373 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178367103 +0000 UTC m=+112.107892113 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178412 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178431 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178423184 +0000 UTC m=+112.107948194 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178472 4031 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178492 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178486246 +0000 UTC m=+112.108011256 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:16.179125 master-0 kubenswrapper[4031]: E0318 08:48:16.178556 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:16.181323 master-0 kubenswrapper[4031]: E0318 08:48:16.178601 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.178590778 +0000 UTC m=+112.108115798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:16.181323 master-0 kubenswrapper[4031]: E0318 08:48:16.179482 4031 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:16.181323 master-0 kubenswrapper[4031]: E0318 08:48:16.179539 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.179524669 +0000 UTC m=+112.109049679 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:16.200976 master-0 kubenswrapper[4031]: E0318 08:48:16.200924 4031 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:16.201052 master-0 kubenswrapper[4031]: E0318 08:48:16.201039 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:17.201006042 +0000 UTC m=+112.130531052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:16.218427 master-0 kubenswrapper[4031]: I0318 08:48:16.218379 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"] Mar 18 08:48:16.230732 master-0 kubenswrapper[4031]: W0318 08:48:16.230123 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f6a7f55_84bd_4ea5_8248_4cb565904c3b.slice/crio-b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d WatchSource:0}: Error finding container b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d: Status 404 returned error can't find the container with id 
b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d Mar 18 08:48:16.233042 master-0 kubenswrapper[4031]: I0318 08:48:16.233004 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"] Mar 18 08:48:16.264437 master-0 kubenswrapper[4031]: I0318 08:48:16.264251 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"] Mar 18 08:48:16.275782 master-0 kubenswrapper[4031]: I0318 08:48:16.275756 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"] Mar 18 08:48:16.279812 master-0 kubenswrapper[4031]: W0318 08:48:16.279773 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65cff83a_8d8f_4e4f_96ef_99941c29ba53.slice/crio-8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b WatchSource:0}: Error finding container 8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b: Status 404 returned error can't find the container with id 8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b Mar 18 08:48:16.280910 master-0 kubenswrapper[4031]: W0318 08:48:16.280885 4031 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f9ba06c_7a6b_4f46_a747_80b0a0b58600.slice/crio-ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f WatchSource:0}: Error finding container ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f: Status 404 returned error can't find the container with id ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f Mar 18 08:48:16.310418 master-0 kubenswrapper[4031]: I0318 08:48:16.308212 4031 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"] Mar 18 08:48:16.320806 master-0 kubenswrapper[4031]: E0318 08:48:16.320733 4031 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:etcd-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a,Command:[cluster-etcd-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml --terminate-on-files=/var/run/secrets/serving-cert/tls.crt --terminate-on-files=/var/run/secrets/serving-cert/tls.key --terminate-on-files=/var/run/secrets/etcd-client/tls.crt --terminate-on-files=/var/run/secrets/etcd-client/tls.key --terminate-on-files=/var/run/configmaps/etcd-ca/ca-bundle.crt --terminate-on-files=/var/run/configmaps/etcd-service-ca/service-ca.crt],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPENSHIFT_PROFILE,Value:web,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-service-ca,ReadOnly:false,MountPath:/var/run/configmaps/etcd-service-ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-client,ReadOnly:false,MountPath:/var/run/secrets/etcd-client,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2mj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
etcd-operator-8544cbcf9c-f2nfl_openshift-etcd-operator(bb6ef4c4-bff3-4559-8e42-582bbd668b7c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 18 08:48:16.322263 master-0 kubenswrapper[4031]: E0318 08:48:16.322208 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" podUID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" Mar 18 08:48:16.417194 master-0 kubenswrapper[4031]: I0318 08:48:16.417147 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerStarted","Data":"2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b"} Mar 18 08:48:16.418716 master-0 kubenswrapper[4031]: I0318 08:48:16.418521 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerStarted","Data":"f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e"} Mar 18 08:48:16.421378 master-0 kubenswrapper[4031]: I0318 08:48:16.421334 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerStarted","Data":"8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b"} Mar 18 08:48:16.424044 master-0 kubenswrapper[4031]: I0318 08:48:16.423979 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" event={"ID":"0f6a7f55-84bd-4ea5-8248-4cb565904c3b","Type":"ContainerStarted","Data":"b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d"} Mar 18 
08:48:16.425041 master-0 kubenswrapper[4031]: I0318 08:48:16.424995 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerStarted","Data":"ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f"} Mar 18 08:48:16.426098 master-0 kubenswrapper[4031]: I0318 08:48:16.426067 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97"} Mar 18 08:48:16.427402 master-0 kubenswrapper[4031]: I0318 08:48:16.427371 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vr4gq" event={"ID":"600c92a1-56c5-497b-a8f0-746830f4180e","Type":"ContainerStarted","Data":"fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63"} Mar 18 08:48:16.431730 master-0 kubenswrapper[4031]: I0318 08:48:16.431695 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerStarted","Data":"d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2"} Mar 18 08:48:16.432974 master-0 kubenswrapper[4031]: I0318 08:48:16.432911 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerStarted","Data":"26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f"} Mar 18 08:48:16.434102 master-0 kubenswrapper[4031]: I0318 08:48:16.434068 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" 
event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerStarted","Data":"3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8"} Mar 18 08:48:16.435514 master-0 kubenswrapper[4031]: E0318 08:48:16.435460 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"\"" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" podUID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" Mar 18 08:48:16.438383 master-0 kubenswrapper[4031]: I0318 08:48:16.438113 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerStarted","Data":"8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947"} Mar 18 08:48:16.441329 master-0 kubenswrapper[4031]: I0318 08:48:16.441279 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" event={"ID":"c5c995cf-40a0-4cd6-87fa-96a522f7bc57","Type":"ContainerStarted","Data":"4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a"} Mar 18 08:48:16.442454 master-0 kubenswrapper[4031]: I0318 08:48:16.442427 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerStarted","Data":"57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15"} Mar 18 08:48:16.893039 master-0 kubenswrapper[4031]: I0318 08:48:16.892440 4031 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:48:16.893039 master-0 kubenswrapper[4031]: I0318 08:48:16.892445 4031 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:16.896243 master-0 kubenswrapper[4031]: I0318 08:48:16.894521 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 08:48:16.896243 master-0 kubenswrapper[4031]: I0318 08:48:16.894916 4031 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 08:48:16.898213 master-0 kubenswrapper[4031]: I0318 08:48:16.897969 4031 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 08:48:17.191777 master-0 kubenswrapper[4031]: I0318 08:48:17.191593 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:17.191777 master-0 kubenswrapper[4031]: I0318 08:48:17.191666 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: I0318 08:48:17.191766 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod 
\"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: I0318 08:48:17.191820 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191786 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191869 4031 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191918 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.191900828 +0000 UTC m=+114.121425838 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191934 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.191927939 +0000 UTC m=+114.121452949 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191945 4031 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: E0318 08:48:17.191981 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.191962909 +0000 UTC m=+114.121487999 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:17.191996 master-0 kubenswrapper[4031]: I0318 08:48:17.191948 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192013 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: I0318 08:48:17.192020 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192032 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.192026931 +0000 UTC m=+114.121551941 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.191822 4031 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: I0318 08:48:17.192057 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192061 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.192053701 +0000 UTC m=+114.121578821 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: I0318 08:48:17.192088 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: I0318 08:48:17.192125 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: I0318 08:48:17.192150 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192206 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192227 4031 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.192219515 +0000 UTC m=+114.121744525 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192256 4031 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192274 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.192267096 +0000 UTC m=+114.121792106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192304 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:17.192386 master-0 kubenswrapper[4031]: E0318 08:48:17.192320 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. 
No retries permitted until 2026-03-18 08:48:19.192315257 +0000 UTC m=+114.121840267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:17.193032 master-0 kubenswrapper[4031]: E0318 08:48:17.192350 4031 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:17.193032 master-0 kubenswrapper[4031]: E0318 08:48:17.192366 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.192360688 +0000 UTC m=+114.121885698 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:17.193032 master-0 kubenswrapper[4031]: E0318 08:48:17.192399 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:17.193032 master-0 kubenswrapper[4031]: E0318 08:48:17.192429 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.19242146 +0000 UTC m=+114.121946470 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:17.293687 master-0 kubenswrapper[4031]: I0318 08:48:17.293582 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:17.293835 master-0 kubenswrapper[4031]: E0318 08:48:17.293744 4031 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:17.293835 master-0 kubenswrapper[4031]: E0318 08:48:17.293812 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:19.29379552 +0000 UTC m=+114.223320530 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:17.450585 master-0 kubenswrapper[4031]: I0318 08:48:17.450145 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerStarted","Data":"e7040e73164a56f089f0acc8e8f60bd6ac708b6b6770784a34fbb303688099ef"} Mar 18 08:48:17.453519 master-0 kubenswrapper[4031]: I0318 08:48:17.453019 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"cca28a804f84553b8b1a53af19f79b42304859cf6bff54e57401c4419c4a7e40"} Mar 18 08:48:17.454870 master-0 kubenswrapper[4031]: E0318 08:48:17.454829 4031 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"\"" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" podUID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" Mar 18 08:48:17.465678 master-0 kubenswrapper[4031]: I0318 08:48:17.465607 4031 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" podStartSLOduration=76.465557613 podStartE2EDuration="1m16.465557613s" podCreationTimestamp="2026-03-18 08:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-18 08:48:17.465382569 +0000 UTC m=+112.394907659" watchObservedRunningTime="2026-03-18 08:48:17.465557613 +0000 UTC m=+112.395082633" Mar 18 08:48:18.457698 master-0 kubenswrapper[4031]: I0318 08:48:18.457653 4031 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="cca28a804f84553b8b1a53af19f79b42304859cf6bff54e57401c4419c4a7e40" exitCode=0 Mar 18 08:48:18.458508 master-0 kubenswrapper[4031]: I0318 08:48:18.458391 4031 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerDied","Data":"cca28a804f84553b8b1a53af19f79b42304859cf6bff54e57401c4419c4a7e40"} Mar 18 08:48:19.214039 master-0 kubenswrapper[4031]: I0318 08:48:19.213762 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:19.214144 master-0 kubenswrapper[4031]: I0318 08:48:19.214048 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:19.214144 master-0 kubenswrapper[4031]: I0318 08:48:19.214080 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: 
\"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:19.214144 master-0 kubenswrapper[4031]: I0318 08:48:19.214102 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:19.214144 master-0 kubenswrapper[4031]: I0318 08:48:19.214118 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:19.214144 master-0 kubenswrapper[4031]: I0318 08:48:19.214137 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:19.214335 master-0 kubenswrapper[4031]: I0318 08:48:19.214153 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:19.214335 master-0 kubenswrapper[4031]: I0318 
08:48:19.214180 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:19.214335 master-0 kubenswrapper[4031]: I0318 08:48:19.214261 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:19.214335 master-0 kubenswrapper[4031]: I0318 08:48:19.214281 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:19.214526 master-0 kubenswrapper[4031]: E0318 08:48:19.213951 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:19.214526 master-0 kubenswrapper[4031]: E0318 08:48:19.214393 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.214373946 +0000 UTC m=+118.143898956 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:19.214799 master-0 kubenswrapper[4031]: E0318 08:48:19.214773 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:19.214843 master-0 kubenswrapper[4031]: E0318 08:48:19.214815 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.214803276 +0000 UTC m=+118.144328296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:19.214908 master-0 kubenswrapper[4031]: E0318 08:48:19.214862 4031 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:19.214908 master-0 kubenswrapper[4031]: E0318 08:48:19.214886 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.214877688 +0000 UTC m=+118.144402698 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:19.214964 master-0 kubenswrapper[4031]: E0318 08:48:19.214931 4031 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:19.214964 master-0 kubenswrapper[4031]: E0318 08:48:19.214955 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.214947929 +0000 UTC m=+118.144472939 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:19.215021 master-0 kubenswrapper[4031]: E0318 08:48:19.214999 4031 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:19.215052 master-0 kubenswrapper[4031]: E0318 08:48:19.215023 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.215015951 +0000 UTC m=+118.144540961 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:19.215084 master-0 kubenswrapper[4031]: E0318 08:48:19.215071 4031 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:19.215084 master-0 kubenswrapper[4031]: E0318 08:48:19.215100 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.215090252 +0000 UTC m=+118.144615262 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:19.215164 master-0 kubenswrapper[4031]: E0318 08:48:19.215143 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:19.215164 master-0 kubenswrapper[4031]: E0318 08:48:19.215163 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.215156174 +0000 UTC m=+118.144681184 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:19.215222 master-0 kubenswrapper[4031]: E0318 08:48:19.215196 4031 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:19.215222 master-0 kubenswrapper[4031]: E0318 08:48:19.215218 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.215210495 +0000 UTC m=+118.144735505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:19.215272 master-0 kubenswrapper[4031]: E0318 08:48:19.215259 4031 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:19.215700 master-0 kubenswrapper[4031]: E0318 08:48:19.214345 4031 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:19.215746 master-0 kubenswrapper[4031]: E0318 08:48:19.215714 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. 
No retries permitted until 2026-03-18 08:48:23.215703526 +0000 UTC m=+118.145228536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:19.215777 master-0 kubenswrapper[4031]: E0318 08:48:19.215742 4031 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.215730717 +0000 UTC m=+118.145255727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:19.314915 master-0 kubenswrapper[4031]: I0318 08:48:19.314771 4031 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:19.315127 master-0 kubenswrapper[4031]: E0318 08:48:19.314946 4031 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:19.315127 master-0 kubenswrapper[4031]: E0318 08:48:19.315014 4031 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.31499638 +0000 UTC m=+118.244521390 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:20.306041 master-0 kubenswrapper[4031]: I0318 08:48:20.305968 4031 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 18 08:48:20.306062 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 08:48:20.338334 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 08:48:20.338762 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 08:48:20.340686 master-0 systemd[1]: kubelet.service: Consumed 9.043s CPU time. Mar 18 08:48:20.355232 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 08:48:20.470582 master-0 kubenswrapper[6976]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:48:20.470582 master-0 kubenswrapper[6976]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 08:48:20.470582 master-0 kubenswrapper[6976]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 08:48:20.471023 master-0 kubenswrapper[6976]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:48:20.471023 master-0 kubenswrapper[6976]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 08:48:20.471023 master-0 kubenswrapper[6976]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 08:48:20.471023 master-0 kubenswrapper[6976]: I0318 08:48:20.470689 6976 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 08:48:20.475346 master-0 kubenswrapper[6976]: W0318 08:48:20.475309 6976 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 08:48:20.475346 master-0 kubenswrapper[6976]: W0318 08:48:20.475332 6976 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 08:48:20.475346 master-0 kubenswrapper[6976]: W0318 08:48:20.475337 6976 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 08:48:20.475346 master-0 kubenswrapper[6976]: W0318 08:48:20.475347 6976 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 08:48:20.475346 master-0 kubenswrapper[6976]: W0318 08:48:20.475350 6976 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475355 6976 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475359 6976 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475363 6976 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475367 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475371 6976 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475376 6976 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475381 6976 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475386 6976 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475391 6976 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475395 6976 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475402 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475405 6976 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475409 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475413 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475417 6976 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475421 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475425 6976 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475429 6976 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475432 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 08:48:20.475511 master-0 kubenswrapper[6976]: W0318 08:48:20.475436 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 08:48:20.475994 master-0 
kubenswrapper[6976]: W0318 08:48:20.475441 6976 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475444 6976 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475448 6976 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475454 6976 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475458 6976 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475462 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475466 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475470 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475474 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475478 6976 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475483 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475508 6976 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475657 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475665 6976 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475671 6976 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475680 6976 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475686 6976 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475692 6976 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475697 6976 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:20.475994 master-0 kubenswrapper[6976]: W0318 08:48:20.475701 6976 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475708 6976 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475717 6976 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475722 6976 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475727 6976 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475732 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475737 6976 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475741 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475749 6976 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475754 6976 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475757 6976 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475761 6976 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475765 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475768 6976 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475773 6976 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475777 6976 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475783 6976 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475791 6976 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475795 6976 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475801 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:20.476433 master-0 kubenswrapper[6976]: W0318 08:48:20.475804 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475808 6976 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475812 6976 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475815 6976 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475819 6976 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475822 6976 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475826 6976 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: W0318 08:48:20.475830 6976 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.475970 6976 flags.go:64] FLAG: --address="0.0.0.0"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.475981 6976 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.475992 6976 flags.go:64] FLAG: --anonymous-auth="true"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.475999 6976 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476005 6976 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476010 6976 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476018 6976 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476024 6976 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476029 6976 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476034 6976 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476043 6976 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476049 6976 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476054 6976 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476059 6976 flags.go:64] FLAG: --cgroup-root=""
Mar 18 08:48:20.476940 master-0 kubenswrapper[6976]: I0318 08:48:20.476063 6976 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476068 6976 flags.go:64] FLAG: --client-ca-file=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476073 6976 flags.go:64] FLAG: --cloud-config=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476077 6976 flags.go:64] FLAG: --cloud-provider=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476090 6976 flags.go:64] FLAG: --cluster-dns="[]"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476096 6976 flags.go:64] FLAG: --cluster-domain=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476101 6976 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476107 6976 flags.go:64] FLAG: --config-dir=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476112 6976 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476125 6976 flags.go:64] FLAG: --container-log-max-files="5"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476132 6976 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476136 6976 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476141 6976 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476148 6976 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476154 6976 flags.go:64] FLAG: --contention-profiling="false"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476158 6976 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476163 6976 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476168 6976 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476172 6976 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476179 6976 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476184 6976 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476191 6976 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476195 6976 flags.go:64] FLAG: --enable-load-reader="false"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476200 6976 flags.go:64] FLAG: --enable-server="true"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476205 6976 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 18 08:48:20.477424 master-0 kubenswrapper[6976]: I0318 08:48:20.476211 6976 flags.go:64] FLAG: --event-burst="100"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476216 6976 flags.go:64] FLAG: --event-qps="50"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476221 6976 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476226 6976 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476230 6976 flags.go:64] FLAG: --eviction-hard=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476239 6976 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476243 6976 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476248 6976 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476253 6976 flags.go:64] FLAG: --eviction-soft=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476258 6976 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476262 6976 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476270 6976 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476274 6976 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476281 6976 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476287 6976 flags.go:64] FLAG: --fail-swap-on="true"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476291 6976 flags.go:64] FLAG: --feature-gates=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476300 6976 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476305 6976 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476310 6976 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476314 6976 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476319 6976 flags.go:64] FLAG: --healthz-port="10248"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476325 6976 flags.go:64] FLAG: --help="false"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476330 6976 flags.go:64] FLAG: --hostname-override=""
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476334 6976 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476339 6976 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 18 08:48:20.478039 master-0 kubenswrapper[6976]: I0318 08:48:20.476343 6976 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476347 6976 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476352 6976 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476356 6976 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476361 6976 flags.go:64] FLAG: --image-service-endpoint=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476367 6976 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476372 6976 flags.go:64] FLAG: --kube-api-burst="100"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476377 6976 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476382 6976 flags.go:64] FLAG: --kube-api-qps="50"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476386 6976 flags.go:64] FLAG: --kube-reserved=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476391 6976 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476395 6976 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476399 6976 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476404 6976 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476411 6976 flags.go:64] FLAG: --lock-file=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476415 6976 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476420 6976 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476425 6976 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476434 6976 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476439 6976 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476444 6976 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476450 6976 flags.go:64] FLAG: --logging-format="text"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476456 6976 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476461 6976 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476466 6976 flags.go:64] FLAG: --manifest-url=""
Mar 18 08:48:20.478551 master-0 kubenswrapper[6976]: I0318 08:48:20.476470 6976 flags.go:64] FLAG: --manifest-url-header=""
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476476 6976 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476480 6976 flags.go:64] FLAG: --max-open-files="1000000"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476486 6976 flags.go:64] FLAG: --max-pods="110"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476490 6976 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476496 6976 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476501 6976 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476505 6976 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476510 6976 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476514 6976 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476518 6976 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476534 6976 flags.go:64] FLAG: --node-status-max-images="50"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476549 6976 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476556 6976 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476580 6976 flags.go:64] FLAG: --pod-cidr=""
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476587 6976 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476596 6976 flags.go:64] FLAG: --pod-manifest-path=""
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476601 6976 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476606 6976 flags.go:64] FLAG: --pods-per-core="0"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476612 6976 flags.go:64] FLAG: --port="10250"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476621 6976 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476627 6976 flags.go:64] FLAG: --provider-id=""
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476632 6976 flags.go:64] FLAG: --qos-reserved=""
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476638 6976 flags.go:64] FLAG: --read-only-port="10255"
Mar 18 08:48:20.479122 master-0 kubenswrapper[6976]: I0318 08:48:20.476643 6976 flags.go:64] FLAG: --register-node="true"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476647 6976 flags.go:64] FLAG: --register-schedulable="true"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476654 6976 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476664 6976 flags.go:64] FLAG: --registry-burst="10"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476672 6976 flags.go:64] FLAG: --registry-qps="5"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476687 6976 flags.go:64] FLAG: --reserved-cpus=""
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476692 6976 flags.go:64] FLAG: --reserved-memory=""
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476698 6976 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476703 6976 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476708 6976 flags.go:64] FLAG: --rotate-certificates="false"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476712 6976 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476717 6976 flags.go:64] FLAG: --runonce="false"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476723 6976 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476728 6976 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476733 6976 flags.go:64] FLAG: --seccomp-default="false"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476737 6976 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476742 6976 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476746 6976 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476751 6976 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476756 6976 flags.go:64] FLAG: --storage-driver-password="root"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476780 6976 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476787 6976 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476792 6976 flags.go:64] FLAG: --storage-driver-user="root"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476797 6976 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476801 6976 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 18 08:48:20.479737 master-0 kubenswrapper[6976]: I0318 08:48:20.476806 6976 flags.go:64] FLAG: --system-cgroups=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476810 6976 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476818 6976 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476825 6976 flags.go:64] FLAG: --tls-cert-file=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476830 6976 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476835 6976 flags.go:64] FLAG: --tls-min-version=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476840 6976 flags.go:64] FLAG: --tls-private-key-file=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476845 6976 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476849 6976 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476856 6976 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476860 6976 flags.go:64] FLAG: --v="2"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476868 6976 flags.go:64] FLAG: --version="false"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476874 6976 flags.go:64] FLAG: --vmodule=""
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476879 6976 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: I0318 08:48:20.476884 6976 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477083 6976 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477090 6976 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477095 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477099 6976 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477103 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477107 6976 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477110 6976 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:20.480351 master-0 kubenswrapper[6976]: W0318 08:48:20.477115 6976 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477120 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477126 6976 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477130 6976 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477135 6976 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477140 6976 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477144 6976 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477148 6976 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477152 6976 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477156 6976 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477160 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477164 6976 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477168 6976 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477172 6976 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477197 6976 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477202 6976 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477206 6976 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477210 6976 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477214 6976 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477220 6976 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:20.480872 master-0 kubenswrapper[6976]: W0318 08:48:20.477223 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477227 6976 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477231 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477245 6976 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477249 6976 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477253 6976 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477256 6976 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477262 6976 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477266 6976 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477270 6976 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477274 6976 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477278 6976 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477281 6976 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477285 6976 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477289 6976 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477292 6976 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477296 6976 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477300 6976 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477303 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477309 6976 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:48:20.481529 master-0 kubenswrapper[6976]: W0318 08:48:20.477313 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477316 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477320 6976 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477324 6976 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477328 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477333 6976 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477337 6976 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477342 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477345 6976 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477349 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477354 6976 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477361 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477365 6976 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477369 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477372 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477376 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477380 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477383 6976 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477387 6976 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477391 6976 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:20.482007 master-0 kubenswrapper[6976]: W0318 08:48:20.477394 6976 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:20.482458 master-0 kubenswrapper[6976]: W0318 08:48:20.477398 6976 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:20.482458 master-0 kubenswrapper[6976]: W0318 08:48:20.477401 6976 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:20.482458 master-0 kubenswrapper[6976]: W0318 08:48:20.477405 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:20.482458 master-0 kubenswrapper[6976]: W0318 08:48:20.477411 6976 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:20.482458 master-0 kubenswrapper[6976]: I0318 08:48:20.477423 6976 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false
NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 18 08:48:20.484947 master-0 kubenswrapper[6976]: I0318 08:48:20.484890 6976 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 18 08:48:20.484947 master-0 kubenswrapper[6976]: I0318 08:48:20.484932 6976 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 18 08:48:20.485042 master-0 kubenswrapper[6976]: W0318 08:48:20.485029 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 08:48:20.485042 master-0 kubenswrapper[6976]: W0318 08:48:20.485039 6976 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485047 6976 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485055 6976 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485066 6976 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485076 6976 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485084 6976 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485092 6976 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485098 6976 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485106 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:20.485108 master-0 kubenswrapper[6976]: W0318 08:48:20.485112 6976 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485122 6976 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485130 6976 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485137 6976 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485144 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485152 6976 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485159 6976 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485166 6976 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485173 6976 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485180 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485186 6976 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485192 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485197 6976 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485206 6976 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485216 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485223 6976 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485233 6976 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485242 6976 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485250 6976 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:20.485382 master-0 kubenswrapper[6976]: W0318 08:48:20.485258 6976 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485264 6976 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485273 6976 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485280 6976 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485286 6976 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485292 6976 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485300 6976 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485305 6976 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485311 6976 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485317 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485322 6976 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485327 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485333 6976 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485339 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485345 6976 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485352 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485357 6976 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485363 6976 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485368 6976 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485374 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:20.486065 master-0 kubenswrapper[6976]: W0318 08:48:20.485379 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485385 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485391 6976 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485396 6976 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485403 6976 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485409 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485414 6976 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485420 6976 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485425 6976 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485431 6976 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485437 6976 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485442 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485447 6976 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485452 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485458 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485466 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485473 6976 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485480 6976 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485486 6976 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485497 6976 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:20.486744 master-0 kubenswrapper[6976]: W0318 08:48:20.485505 6976 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485513 6976 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485521 6976 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: I0318 08:48:20.485533 6976 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485774 6976 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485784 6976 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485791 6976 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485797 6976 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485803 6976 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485809 6976 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485814 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485820 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485826 6976 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485831 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485837 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 08:48:20.487309 master-0 kubenswrapper[6976]: W0318 08:48:20.485842 6976 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485847 6976 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485852 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485858 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485864 6976 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485869 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485874 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485880 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485887 6976 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485894 6976 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485901 6976 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485908 6976 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485914 6976 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485920 6976 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485925 6976 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485931 6976 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485952 6976 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485966 6976 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485972 6976 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485979 6976 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 08:48:20.487784 master-0 kubenswrapper[6976]: W0318 08:48:20.485985 6976 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.485990 6976 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.485996 6976 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486002 6976 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486008 6976 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486016 6976 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486022 6976 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486029 6976 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486034 6976 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486040 6976 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486045 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486051 6976 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486056 6976 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486064 6976 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486071 6976 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486077 6976 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486083 6976 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486090 6976 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486096 6976 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 08:48:20.488363 master-0 kubenswrapper[6976]: W0318 08:48:20.486102 6976 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486109 6976 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486116 6976 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486121 6976 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486128 6976 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486134 6976 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486140 6976 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486146 6976 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486151 6976 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486157 6976 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486162 6976 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486167 6976 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486172 6976 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486178 6976 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486184 6976 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486190 6976 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486196 6976 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486202 6976 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486208 6976 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486213 6976 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 08:48:20.488960 master-0 kubenswrapper[6976]: W0318 08:48:20.486218 6976 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: W0318 08:48:20.486223 6976 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.486231 6976 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.486468 6976 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.488540 6976 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.488656 6976 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.488959 6976 server.go:997] "Starting client certificate rotation"
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.488972 6976 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.489274 6976 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 02:39:01.374944986 +0000 UTC
Mar 18 08:48:20.489534 master-0 kubenswrapper[6976]: I0318 08:48:20.489437 6976 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h50m40.885514053s for next certificate rotation
Mar 18 08:48:20.489932 master-0 kubenswrapper[6976]: I0318 08:48:20.489898 6976 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:48:20.491849 master-0 kubenswrapper[6976]: I0318 08:48:20.491768 6976 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:48:20.495457 master-0 kubenswrapper[6976]: I0318 08:48:20.495425 6976 log.go:25] "Validated CRI v1 runtime API"
Mar 18 08:48:20.498339 master-0 kubenswrapper[6976]: I0318 08:48:20.498293 6976 log.go:25] "Validated CRI v1 image API"
Mar 18 08:48:20.501194 master-0 kubenswrapper[6976]: I0318 08:48:20.501074 6976 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 08:48:20.505935 master-0 kubenswrapper[6976]: I0318 08:48:20.505853 6976 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c54ba44d-560c-4408-b24b-989ec8b7c22d:/dev/vda3]
Mar 18 08:48:20.506321 master-0 kubenswrapper[6976]: I0318 08:48:20.505894 6976 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm major:0 minor:55 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm major:0 minor:51 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt:{mountpoint:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv:{mountpoint:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4:{mountpoint:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4 major:0 minor:248 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45:{mountpoint:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45 major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d:{mountpoint:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn:{mountpoint:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw:{mountpoint:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7:{mountpoint:/var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6:{mountpoint:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k:{mountpoint:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k major:0 minor:125 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj:{mountpoint:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv:{mountpoint:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/85d361a2-3f83-4857-b96e-3e98fcf33463/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/85d361a2-3f83-4857-b96e-3e98fcf33463/volumes/kubernetes.io~projected/kube-api-access major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6:{mountpoint:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6 major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq:{mountpoint:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd:{mountpoint:/var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd major:0 minor:105 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75:{mountpoint:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75 major:0 minor:226 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj:{mountpoint:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:215 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj:{mountpoint:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9:{mountpoint:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9 major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc:{mountpoint:/var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd:{mountpoint:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd:{mountpoint:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp:{mountpoint:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx:{mountpoint:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm:{mountpoint:/var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm major:0 minor:118 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/4c47e45992f82bd9fc61b04be52f443613e73f85006cb4b165d67f2196aea83a/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/8d8efcf55379a4100d7208c661f085206aef74963806c7476ff42be27a92696f/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/bdfe39f95d3b202f00c923753d9d6890b29826f71e00e044936053c5e2ec7a15/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/7dc33dfb2a77c31b29b0bb54d2d282a115cd194ed938ef994cea975254202bb3/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/1c6466b3628fc565136e0691e17ce5c46d7a0133cb5860fad1a0ca51f87710cd/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/0d8bced82840a9f9232793a0678f33fec9473c217bab4b0586dfc496dd03649d/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/fd8f02f1c530e2c8c730ebbd0d20a130e5e6d6a2c5fe3aa8207d53edf40ad82d/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/2c52541ce41297dde53f2ee5e42c61db071bdf528e3cda352f0bc772ecd71eef/merged major:0 minor:136 fsType:overlay 
blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/785a2543258fcfac11fc02e001bc0a5c78666939b0636940db03e285ef383217/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/f010b2d60901e3cf4c0f602d88d6dab6b4feadfa717dbf721e7dde0ed06810be/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/db69fb55ac1479baf170ab3b46e4903d27ebaa5a23776b76643c14c3d426a1a8/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/1f13f53c83c73fbf3914ab318bf052924f7c4f22826bea9edc1966832dbd1558/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/e398eb315bd3288d9dd8c9e63ce649fba4d300b3bf53a12b8ce06b045a0ecb14/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/104f5eb7f9ffe4ce970eba484b8db25309358a79be39dd34f7a839dab2c56b60/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/543706edf07f3320d5cd38e6e5ca61a30302ad8252a080bf2635e0e1a5c63f07/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/80d2fb3dbde771e01246c44d90576409e83b30249c89b0510346f29557a8c336/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/d602aa7da6a39395e493db7a42692a0debdee6bb1b4b10910d9d16bc383eb04a/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/abdc16823e0755d0654995a3654a683b86460077380b982fc7135adb48e04154/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/82756b23d322695dbcbaf260593537cf6842c7d4f04e58a3cac96424d6bcce13/merged major:0 minor:189 fsType:overlay blockSize:0} 
overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/e235d4ec4b3544321e4cf270b88803b29cd98a620827c49a1989f2a088497284/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/f226b950c0151f3df63dde124850a5d8d6cec8b0715a21c93700dd09b23486f8/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/86628707f516630330e88157bb4daa0caf89f180360072e868b9813deb89bc02/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/61ceae7dc64e24f4b055facb1c2117de17e6e83f4a9b462ff96469f2d19822a5/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/34e4ce1e5bf2b2306a27b6629fd355bb46d2e89cc5e8ea68e68396be1fbd1a03/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/7e958d9b28805626ece5258339e6f7740d66159b568cd33c035744020f42dd04/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/e85533abfb21e7efd7dda7407ac071c6bf9c7636093ce3be21187b402c7515ee/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/fa13797f8c0a37316cec34b029e6bbd39a7a34a19f44c335031ab6a0caead5e9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1159ffd45426420252c966924c6674dbedebf495156aba3dd158c1a42996ad41/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/bb18310434caec036386994e64eb73d9e44ff1b076f60e12a8109091c24c3b5e/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/9e01d3cb6fdae6d1f0f8adfc8dd5c7e404a31c6b5332a7ec70cf7499626001df/merged major:0 minor:289 fsType:overlay blockSize:0} 
overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/48dc29145c1cbff342048d47f991e442f5510e3c2a675475f02415a50b805b8d/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/42e8fce422734819f1a754be3efac84d55c41d508b911c2777eadf3872014faf/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/c957d50d95dada146baeb76503e396143aae1aec0eb39ea39103eb81c6b8d0b1/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/665bf0b08231fae7a3a5e9a4091b95d34bdc882da29b27f8c997485edd416742/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/1e200f61e8a5c8b96373cd8ea00b5a939b4d0d4af817763eaa5ded78dc099a1f/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/5c161f0b6046f7c1df35b0458d48c972ad1df1016de839f99a4f7c4432c19040/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/33bae1441873e47d361a25e2ae65e85d9bb8e8c69e33f36ffea7bd5484698507/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/fe2bf3dd4f4e42d19038e971c1bf994db8de0b00b5635354e54dd076464c702e/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/a0ccc0363a504404011ad7c6806ddfea08d8f271c81d4dda0e9bf1e588b7a464/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/7dfbe2eaa24d81f776c1e4dc36f984a6d76bcfa2c789cfe2ed84ad9e4ff2600e/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/da78c30559d5b0fc50772740f4d04b8bd7b779c75bfc4be41ebf2335222a96e7/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/f173634f6ed330b042bdfd0dce3052a0136318952d16112ce57a1765c0e8a930/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/bfa15c70a1837e285f9bfd7d0b819e908f5133988f5f5f6f3af3cc3f908f942f/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/dd6c48d9fde20b65ab2441fb1f03ef2fefd802c7490e5caa8af80a598d3fd126/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/64863e909a1f61396d108b56938dfeca35ead374593410a5d274ffc0950f1f48/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/2d6cc5f1cc63006911368c6f108296409a9da93e5ca1b5717bc8181fd38b78ea/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/c473f9b17374c16754b18d1c62eef56f5316facc331cfe9b0f8be3550725e27e/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/b252c68acaa656649999dec6d99384b28c58af9e97282b0d227bb5a210b07e12/merged major:0 minor:86 fsType:overlay blockSize:0}] Mar 18 08:48:20.530783 master-0 kubenswrapper[6976]: I0318 08:48:20.530143 6976 manager.go:217] Machine: {Timestamp:2026-03-18 08:48:20.529034044 +0000 UTC m=+0.114635659 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:a182270b4b4e4574b525d56213aa67ea SystemUUID:a182270b-4b4e-4574-b525-d56213aa67ea BootID:c890c208-5a3a-4b66-9a9b-e57ae2c6aae9 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 
DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9 DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm DeviceMajor:0 DeviceMinor:55 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 
DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45 DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4 DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn DeviceMajor:0 DeviceMinor:140 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq DeviceMajor:0 DeviceMinor:168 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7 DeviceMajor:0 DeviceMinor:260 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6 DeviceMajor:0 DeviceMinor:94 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/85d361a2-3f83-4857-b96e-3e98fcf33463/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 
DeviceMinor:98 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5 DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6 DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:141 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:257 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm DeviceMajor:0 DeviceMinor:51 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 
DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv DeviceMajor:0 DeviceMinor:249 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75 DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:166 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 
Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:2268116be19023b MacAddress:16:2e:14:59:a1:4d Speed:10000 Mtu:8900} {Name:26feed0c101f6d4 MacAddress:96:9d:7d:cf:e3:2c Speed:10000 Mtu:8900} {Name:3827efb6815dbb1 MacAddress:4e:a7:48:82:3d:d1 Speed:10000 Mtu:8900} {Name:4a9c798432c4910 MacAddress:f6:1e:da:31:0c:56 Speed:10000 Mtu:8900} {Name:57683f550936db1 MacAddress:ba:f5:b3:5b:0c:1b Speed:10000 Mtu:8900} {Name:8635320a4b36d9f MacAddress:82:30:49:f1:5c:ed Speed:10000 Mtu:8900} {Name:8e583348603a749 MacAddress:3e:04:73:45:21:42 Speed:10000 Mtu:8900} {Name:b988232227aa085 MacAddress:56:53:9f:13:d4:7c Speed:10000 Mtu:8900} {Name:ba34b3933aeb088 MacAddress:3a:1f:aa:c9:6d:96 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:76:3c:3d:4a:0d:9f Speed:0 Mtu:8900} {Name:c445746454631d8 MacAddress:d2:96:72:89:60:d8 Speed:10000 Mtu:8900} {Name:d6ff7b83413c434 MacAddress:2a:84:4d:06:a3:82 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:21:a5:eb Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:b3:c6:d8 Speed:-1 Mtu:9000} {Name:f81c411903140f1 MacAddress:ba:d5:8a:b7:4d:67 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1a:32:43:41:d1:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 
Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] 
SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 18 08:48:20.530783 master-0 kubenswrapper[6976]: I0318 08:48:20.530764 6976 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 18 08:48:20.531081 master-0 kubenswrapper[6976]: I0318 08:48:20.531028 6976 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 18 08:48:20.531763 master-0 kubenswrapper[6976]: I0318 08:48:20.531725 6976 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 18 08:48:20.532128 master-0 kubenswrapper[6976]: I0318 08:48:20.532038 6976 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 18 08:48:20.532485 master-0 kubenswrapper[6976]: I0318 08:48:20.532125 6976 container_manager_linux.go:272] "Creating Container Manager object based on Node Config"
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 18 08:48:20.532620 master-0 kubenswrapper[6976]: I0318 08:48:20.532587 6976 topology_manager.go:138] "Creating topology manager with none policy"
Mar 18 08:48:20.532740 master-0 kubenswrapper[6976]: I0318 08:48:20.532626 6976 container_manager_linux.go:303] "Creating device plugin manager"
Mar 18 08:48:20.532740 master-0 kubenswrapper[6976]: I0318 08:48:20.532736 6976 manager.go:142]
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 08:48:20.532815 master-0 kubenswrapper[6976]: I0318 08:48:20.532760 6976 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 18 08:48:20.533007 master-0 kubenswrapper[6976]: I0318 08:48:20.532987 6976 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 08:48:20.533132 master-0 kubenswrapper[6976]: I0318 08:48:20.533100 6976 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 18 08:48:20.533312 master-0 kubenswrapper[6976]: I0318 08:48:20.533288 6976 kubelet.go:418] "Attempting to sync node with API server"
Mar 18 08:48:20.533312 master-0 kubenswrapper[6976]: I0318 08:48:20.533306 6976 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 18 08:48:20.533394 master-0 kubenswrapper[6976]: I0318 08:48:20.533322 6976 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 18 08:48:20.533394 master-0 kubenswrapper[6976]: I0318 08:48:20.533335 6976 kubelet.go:324] "Adding apiserver pod source"
Mar 18 08:48:20.533394 master-0 kubenswrapper[6976]: I0318 08:48:20.533354 6976 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 18 08:48:20.535067 master-0 kubenswrapper[6976]: I0318 08:48:20.535027 6976 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 18 08:48:20.535265 master-0 kubenswrapper[6976]: I0318 08:48:20.535237 6976 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Mar 18 08:48:20.535632 master-0 kubenswrapper[6976]: I0318 08:48:20.535602 6976 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 18 08:48:20.535777 master-0 kubenswrapper[6976]: I0318 08:48:20.535753 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 18 08:48:20.535809 master-0 kubenswrapper[6976]: I0318 08:48:20.535778 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 18 08:48:20.535809 master-0 kubenswrapper[6976]: I0318 08:48:20.535788 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 18 08:48:20.535809 master-0 kubenswrapper[6976]: I0318 08:48:20.535796 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 18 08:48:20.535809 master-0 kubenswrapper[6976]: I0318 08:48:20.535803 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 18 08:48:20.535809 master-0 kubenswrapper[6976]: I0318 08:48:20.535811 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535819 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535827 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535838 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535846 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535873 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535888 6976 plugins.go:603] "Loaded volume plugin"
pluginName="kubernetes.io/local-volume"
Mar 18 08:48:20.535959 master-0 kubenswrapper[6976]: I0318 08:48:20.535921 6976 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 18 08:48:20.536297 master-0 kubenswrapper[6976]: I0318 08:48:20.536270 6976 server.go:1280] "Started kubelet"
Mar 18 08:48:20.536768 master-0 kubenswrapper[6976]: I0318 08:48:20.536396 6976 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 18 08:48:20.536768 master-0 kubenswrapper[6976]: I0318 08:48:20.536765 6976 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 18 08:48:20.537223 master-0 kubenswrapper[6976]: I0318 08:48:20.537189 6976 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 18 08:48:20.537353 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 18 08:48:20.545675 master-0 kubenswrapper[6976]: I0318 08:48:20.545057 6976 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 18 08:48:20.552628 master-0 kubenswrapper[6976]: I0318 08:48:20.552252 6976 server.go:449] "Adding debug handlers to kubelet server"
Mar 18 08:48:20.552970 master-0 kubenswrapper[6976]: I0318 08:48:20.552933 6976 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 18 08:48:20.553019 master-0 kubenswrapper[6976]: I0318 08:48:20.552981 6976 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 18 08:48:20.553221 master-0 kubenswrapper[6976]: I0318 08:48:20.553138 6976 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:20.553221 master-0 kubenswrapper[6976]: I0318 08:48:20.553146 6976 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 01:30:22.396305699 +0000 UTC
Mar 18 08:48:20.553221 master-0 kubenswrapper[6976]: I0318 08:48:20.553197
6976 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 16h42m1.843110571s for next certificate rotation
Mar 18 08:48:20.553325 master-0 kubenswrapper[6976]: E0318 08:48:20.553233 6976 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 08:48:20.553325 master-0 kubenswrapper[6976]: I0318 08:48:20.553278 6976 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 18 08:48:20.553325 master-0 kubenswrapper[6976]: I0318 08:48:20.553287 6976 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 18 08:48:20.553392 master-0 kubenswrapper[6976]: I0318 08:48:20.553365 6976 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 18 08:48:20.555010 master-0 kubenswrapper[6976]: I0318 08:48:20.554980 6976 factory.go:55] Registering systemd factory
Mar 18 08:48:20.555010 master-0 kubenswrapper[6976]: I0318 08:48:20.555002 6976 factory.go:221] Registration of the systemd container factory successfully
Mar 18 08:48:20.555093 master-0 kubenswrapper[6976]: I0318 08:48:20.555003 6976 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.555713 6976 factory.go:153] Registering CRI-O factory
Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.555734 6976 factory.go:221] Registration of the crio container factory successfully
Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.555896 6976 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.555918 6976 factory.go:103] Registering Raw factory
Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.555932
6976 manager.go:1196] Started watching for new ooms in manager Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.556349 6976 manager.go:319] Starting recovery of all containers Mar 18 08:48:20.556641 master-0 kubenswrapper[6976]: I0318 08:48:20.556551 6976 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558679 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558722 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558738 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e48101ca-f356-45e3-93d7-4e17b8d8066c" volumeName="kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558751 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6833a48-fccb-42bd-ac90-29f08d5bf7e8" volumeName="kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558767 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" 
volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558779 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" volumeName="kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558791 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558804 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558820 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558831 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558845 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" 
volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558858 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09269324-c908-474d-818f-5cd49406f1e2" volumeName="kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558871 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09269324-c908-474d-818f-5cd49406f1e2" volumeName="kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558887 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558899 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558911 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558923 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" 
volumeName="kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.558959 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559027 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559041 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559084 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559100 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559113 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" 
volumeName="kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559125 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559150 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="600c92a1-56c5-497b-a8f0-746830f4180e" volumeName="kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559163 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559178 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559192 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca9d4694-8675-47c5-819f-89bba9dcdc0f" volumeName="kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559207 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85d361a2-3f83-4857-b96e-3e98fcf33463" 
volumeName="kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559219 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559232 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559245 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559260 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559278 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d0da6e3-3887-4361-8eae-e7447f9ff72c" volumeName="kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559297 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" 
volumeName="kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559314 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559330 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="85d361a2-3f83-4857-b96e-3e98fcf33463" volumeName="kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559347 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559364 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559379 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559395 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" 
volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559410 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559422 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559442 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559454 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559468 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559480 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559492 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca9d4694-8675-47c5-819f-89bba9dcdc0f" volumeName="kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559504 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559516 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559529 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559541 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559557 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559610 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559625 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" volumeName="kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559640 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5c995cf-40a0-4cd6-87fa-96a522f7bc57" volumeName="kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559652 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" volumeName="kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559665 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559677 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559688 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client" seLinuxMountContext="" Mar 18 08:48:20.559819 master-0 kubenswrapper[6976]: I0318 08:48:20.559703 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560178 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560202 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560240 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" volumeName="kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560262 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560278 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560318 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560332 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560348 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee838-424f-482b-942f-08f0952a5ccd" volumeName="kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560366 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560406 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560436 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560480 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560500 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="600c92a1-56c5-497b-a8f0-746830f4180e" volumeName="kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.560524 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" volumeName="kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562042 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562128 6976 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562147 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4192ea44-a38c-4b70-93c3-8070da2ffe2f" volumeName="kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562159 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562197 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562211 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562228 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562239 6976 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm" seLinuxMountContext="" Mar 18 08:48:20.562226 master-0 kubenswrapper[6976]: I0318 08:48:20.562249 6976 reconstruct.go:97] "Volume reconstruction finished" Mar 18 08:48:20.562874 master-0 kubenswrapper[6976]: I0318 08:48:20.562274 6976 reconciler.go:26] "Reconciler: start to sync state" Mar 18 08:48:20.595506 master-0 kubenswrapper[6976]: I0318 08:48:20.595452 6976 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 18 08:48:20.597133 master-0 kubenswrapper[6976]: I0318 08:48:20.597107 6976 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 18 08:48:20.597188 master-0 kubenswrapper[6976]: I0318 08:48:20.597148 6976 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 18 08:48:20.597188 master-0 kubenswrapper[6976]: I0318 08:48:20.597173 6976 kubelet.go:2335] "Starting kubelet main sync loop" Mar 18 08:48:20.597821 master-0 kubenswrapper[6976]: E0318 08:48:20.597217 6976 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 18 08:48:20.598845 master-0 kubenswrapper[6976]: I0318 08:48:20.598820 6976 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 18 08:48:20.602531 master-0 kubenswrapper[6976]: I0318 08:48:20.602480 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b" exitCode=1 Mar 18 08:48:20.604257 master-0 kubenswrapper[6976]: I0318 08:48:20.604140 6976 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="cca28a804f84553b8b1a53af19f79b42304859cf6bff54e57401c4419c4a7e40" exitCode=0 Mar 18 
08:48:20.620237 master-0 kubenswrapper[6976]: I0318 08:48:20.620178 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8bbcbb7729919ddcb0aaf177e6b7da70bdb956a0c249d6fd8ccdc6cd23b74071" exitCode=0
Mar 18 08:48:20.620237 master-0 kubenswrapper[6976]: I0318 08:48:20.620232 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="332c9bf8c34c932234aed0104fb033cece220b16a730251a8ed2dddb4807fbb9" exitCode=0
Mar 18 08:48:20.620378 master-0 kubenswrapper[6976]: I0318 08:48:20.620241 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="29cb6a70b4f03bbaa88bb2a9cd200f77d44062bf7d6a056e592a38539d450a65" exitCode=0
Mar 18 08:48:20.620459 master-0 kubenswrapper[6976]: I0318 08:48:20.620421 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="18607609fc2c048f02839d5d864c5753901b636e45e41dd655403f7b6b802044" exitCode=0
Mar 18 08:48:20.620510 master-0 kubenswrapper[6976]: I0318 08:48:20.620471 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8e721654d7a6dd53ba602bb38e73e10bda4fb74bd83575e72d850a92e1f3620b" exitCode=0
Mar 18 08:48:20.620510 master-0 kubenswrapper[6976]: I0318 08:48:20.620482 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="4d7c904f1acd55b9d920d547c73d752e1d361d2495697dc27fa3307ea6bf7119" exitCode=0
Mar 18 08:48:20.623479 master-0 kubenswrapper[6976]: I0318 08:48:20.623446 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 08:48:20.624161 master-0 kubenswrapper[6976]: I0318 08:48:20.624131 6976 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526" exitCode=1
Mar 18 08:48:20.624161 master-0 kubenswrapper[6976]: I0318 08:48:20.624156 6976 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4" exitCode=0
Mar 18 08:48:20.627458 master-0 kubenswrapper[6976]: I0318 08:48:20.627431 6976 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 18 08:48:20.630867 master-0 kubenswrapper[6976]: I0318 08:48:20.630820 6976 generic.go:334] "Generic (PLEG): container finished" podID="8dacdedc-c6ad-40d4-afdc-59a31be417fe" containerID="ef703157d612ad5a33aedc987f4c2c3909390ffd8d83083c1d4a577646a22e59" exitCode=0
Mar 18 08:48:20.633453 master-0 kubenswrapper[6976]: I0318 08:48:20.633430 6976 generic.go:334] "Generic (PLEG): container finished" podID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerID="6af9b3db51dc2800e23bac1d32175e8ad4a26ab1ee574f2d956ea30888e63922" exitCode=0
Mar 18 08:48:20.637409 master-0 kubenswrapper[6976]: I0318 08:48:20.637388 6976 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d" exitCode=0
Mar 18 08:48:20.649105 master-0 kubenswrapper[6976]: I0318 08:48:20.649056 6976 generic.go:334] "Generic (PLEG): container finished" podID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerID="19dda705eb005970ec7faa939c9f315d05d7277d2869c2b15c7b89d228425457" exitCode=0
Mar 18 08:48:20.679518 master-0 kubenswrapper[6976]: I0318 08:48:20.679487 6976 manager.go:324] Recovery completed
Mar 18 08:48:20.697508 master-0 kubenswrapper[6976]: E0318 08:48:20.697484 6976 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 18 08:48:20.706639 master-0 kubenswrapper[6976]: I0318 08:48:20.706614 6976 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 08:48:20.706639 master-0 kubenswrapper[6976]: I0318 08:48:20.706637 6976 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 08:48:20.706749 master-0 kubenswrapper[6976]: I0318 08:48:20.706658 6976 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 08:48:20.706980 master-0 kubenswrapper[6976]: I0318 08:48:20.706959 6976 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 08:48:20.707017 master-0 kubenswrapper[6976]: I0318 08:48:20.706978 6976 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 08:48:20.707017 master-0 kubenswrapper[6976]: I0318 08:48:20.707002 6976 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 08:48:20.707017 master-0 kubenswrapper[6976]: I0318 08:48:20.707011 6976 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 08:48:20.707017 master-0 kubenswrapper[6976]: I0318 08:48:20.707020 6976 policy_none.go:49] "None policy: Start"
Mar 18 08:48:20.708162 master-0 kubenswrapper[6976]: I0318 08:48:20.708143 6976 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 08:48:20.708232 master-0 kubenswrapper[6976]: I0318 08:48:20.708170 6976 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 08:48:20.708383 master-0 kubenswrapper[6976]: I0318 08:48:20.708368 6976 state_mem.go:75] "Updated machine memory state"
Mar 18 08:48:20.708383 master-0 kubenswrapper[6976]: I0318 08:48:20.708381 6976 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 08:48:20.716249 master-0 kubenswrapper[6976]: I0318 08:48:20.716217 6976 manager.go:334] "Starting Device Plugin manager"
Mar 18 08:48:20.716249 master-0 kubenswrapper[6976]: I0318 08:48:20.716250 6976 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 08:48:20.716400 master-0 kubenswrapper[6976]: I0318 08:48:20.716264 6976 server.go:79] "Starting device plugin registration server"
Mar 18 08:48:20.716835 master-0 kubenswrapper[6976]: I0318 08:48:20.716814 6976 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 08:48:20.716948 master-0 kubenswrapper[6976]: I0318 08:48:20.716833 6976 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 08:48:20.717060 master-0 kubenswrapper[6976]: I0318 08:48:20.717033 6976 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 08:48:20.717238 master-0 kubenswrapper[6976]: I0318 08:48:20.717185 6976 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 08:48:20.717238 master-0 kubenswrapper[6976]: I0318 08:48:20.717236 6976 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 08:48:20.817760 master-0 kubenswrapper[6976]: I0318 08:48:20.817633 6976 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 08:48:20.819185 master-0 kubenswrapper[6976]: I0318 08:48:20.819153 6976 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 08:48:20.819252 master-0 kubenswrapper[6976]: I0318 08:48:20.819233 6976 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 08:48:20.819252 master-0 kubenswrapper[6976]: I0318 08:48:20.819251 6976 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 08:48:20.819369 master-0 kubenswrapper[6976]: I0318 08:48:20.819333 6976 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 08:48:20.898661 master-0 kubenswrapper[6976]: I0318 08:48:20.897990 6976 kubelet.go:2421] "SyncLoop ADD" source="file"
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 18 08:48:20.898661 master-0 kubenswrapper[6976]: I0318 08:48:20.898616 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"}
Mar 18 08:48:20.898661 master-0 kubenswrapper[6976]: I0318 08:48:20.898681 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b"}
Mar 18 08:48:20.898661 master-0 kubenswrapper[6976]: I0318 08:48:20.898702 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097"}
Mar 18 08:48:20.898997 master-0 kubenswrapper[6976]: I0318 08:48:20.898745 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e0b37287226cec590faa4200c15d2fef886c4879e12913c9f633d02f362fc880"}
Mar 18 08:48:20.898997 master-0 kubenswrapper[6976]: I0318 08:48:20.898764 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526"}
Mar 18 08:48:20.898997 master-0 kubenswrapper[6976]: I0318 08:48:20.898787 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4"}
Mar 18 08:48:20.898997 master-0 kubenswrapper[6976]: I0318 08:48:20.898811 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768"}
Mar 18 08:48:20.899134 master-0 kubenswrapper[6976]: I0318 08:48:20.899024 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f8da5e2e3cca080f8e7ee476951ce9423039dd275ca18645fe053e445bb1fd"
Mar 18 08:48:20.899134 master-0 kubenswrapper[6976]: I0318 08:48:20.899092 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95"}
Mar 18 08:48:20.899193 master-0 kubenswrapper[6976]: I0318 08:48:20.899139 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a"}
Mar 18 08:48:20.899193 master-0 kubenswrapper[6976]: I0318 08:48:20.899157 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d"}
Mar 18 08:48:20.899193 master-0 kubenswrapper[6976]: I0318 08:48:20.899176 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b"}
Mar 18 08:48:20.899271 master-0 kubenswrapper[6976]: I0318 08:48:20.899194 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9af47a1fce5f49f05d98ded301fb823e1f5cbb6403282d7c4e47623e10192f4e"
Mar 18 08:48:20.899271 master-0 kubenswrapper[6976]: I0318 08:48:20.899215 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d"}
Mar 18 08:48:20.899271 master-0 kubenswrapper[6976]: I0318 08:48:20.899231 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d"}
Mar 18 08:48:20.899271 master-0 kubenswrapper[6976]: I0318 08:48:20.899246 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9"}
Mar 18 08:48:20.899361 master-0 kubenswrapper[6976]: I0318 08:48:20.899273 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047"}
Mar 18 08:48:20.899361 master-0 kubenswrapper[6976]: I0318 08:48:20.899289 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"9eeddb357e31077214bc6fef49178f88b5d294912702d649ea4a30b26a11e0ed"}
Mar 18 08:48:20.899361 master-0 kubenswrapper[6976]: I0318 08:48:20.899326 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2"
Mar 18 08:48:21.030933 master-0 kubenswrapper[6976]: I0318 08:48:21.030696 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.030933 master-0 kubenswrapper[6976]: I0318 08:48:21.030832 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.030933 master-0 kubenswrapper[6976]: I0318 08:48:21.030892 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.030933 master-0 kubenswrapper[6976]: I0318 08:48:21.030945 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID:
\"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.030993 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.031064 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.031110 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.031185 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.031273 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.031362 master-0 kubenswrapper[6976]: I0318 08:48:21.031320 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031371 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031421 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031465 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031508 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031547 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031625 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.031800 master-0 kubenswrapper[6976]: I0318 08:48:21.031669 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.077951 master-0 kubenswrapper[6976]: E0318 08:48:21.077840 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.077951 master-0 kubenswrapper[6976]: E0318 08:48:21.077847 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.078133 master-0 kubenswrapper[6976]: E0318 08:48:21.077847 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.078532 master-0 kubenswrapper[6976]: W0318 08:48:21.078485 6976 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 18 08:48:21.078653 master-0 kubenswrapper[6976]: E0318 08:48:21.078615 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.081028 master-0 kubenswrapper[6976]: I0318 08:48:21.080230 6976 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 18 08:48:21.081028 master-0 kubenswrapper[6976]: I0318 08:48:21.080299 6976 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 18 08:48:21.132646 master-0 kubenswrapper[6976]: I0318 08:48:21.132557 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.132806 master-0 kubenswrapper[6976]: I0318 08:48:21.132701 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.132806 master-0 kubenswrapper[6976]: I0318 08:48:21.132798 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132819 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132835 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132849 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132867 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132885 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.132905 master-0 kubenswrapper[6976]: I0318 08:48:21.132903 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.132922 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.132941 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.132987 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.132996 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133012 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133035 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133055 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133058 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133071 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133095 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133123 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]:
I0318 08:48:21.133137 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133157 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133154 master-0 kubenswrapper[6976]: I0318 08:48:21.133167 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133210 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133212 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133226 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133240 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133282 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133313 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133340 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133381 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133410 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133439 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:48:21.133728 master-0 kubenswrapper[6976]: I0318 08:48:21.133466 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:21.178322 master-0 kubenswrapper[6976]: E0318 08:48:21.178269 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 08:48:21.534678 master-0 kubenswrapper[6976]: I0318 08:48:21.534628 6976 apiserver.go:52] "Watching apiserver"
Mar 18 08:48:21.547919 master-0 kubenswrapper[6976]: I0318 08:48:21.547883 6976 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 08:48:21.551335 master-0 kubenswrapper[6976]: I0318 08:48:21.550073 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-tjfg6","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9","openshift-multus/multus-additional-cni-plugins-68tmr","openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx","openshift-network-diagnostics/network-check-target-7r2q2","openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp","openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf","openshift-marketplace/marketplace-operator-89ccd998f-m862c","openshift-multus/multus-h7vq8","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr","openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p","openshift-etcd/etcd-master-0-master-0","openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh","openshift-multus/network-metrics-daemon-2xs9n","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l","openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr","openshift-network-operator/network-operator-7bd846bfc4-6rtpx","openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5","openshift-ovn-kubernetes/ovnkube-node-6ff5l","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd
-l9wpl","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp","openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r","openshift-dns-operator/dns-operator-9c5679d8f-2649q","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq","openshift-network-node-identity/network-node-identity-lf7kq","openshift-network-operator/iptables-alerter-vr4gq"] Mar 18 08:48:21.551335 master-0 kubenswrapper[6976]: I0318 08:48:21.550379 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 08:48:21.551335 master-0 kubenswrapper[6976]: I0318 08:48:21.550412 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:21.551335 master-0 kubenswrapper[6976]: I0318 08:48:21.550426 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:21.551335 master-0 kubenswrapper[6976]: I0318 08:48:21.550718 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:21.553266 master-0 kubenswrapper[6976]: I0318 08:48:21.553229 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:21.553868 master-0 kubenswrapper[6976]: I0318 08:48:21.553389 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:21.553868 master-0 kubenswrapper[6976]: I0318 08:48:21.553547 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:21.555032 master-0 kubenswrapper[6976]: I0318 08:48:21.554746 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.555032 master-0 kubenswrapper[6976]: I0318 08:48:21.554926 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:21.555032 master-0 kubenswrapper[6976]: I0318 08:48:21.555001 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:21.556575 master-0 kubenswrapper[6976]: I0318 08:48:21.556053 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:21.557662 master-0 kubenswrapper[6976]: I0318 08:48:21.557265 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:21.557889 master-0 kubenswrapper[6976]: I0318 08:48:21.557866 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:21.558420 master-0 kubenswrapper[6976]: I0318 08:48:21.558377 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.560950 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.561033 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.561047 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.561095 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.560957 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 08:48:21.561276 master-0 kubenswrapper[6976]: I0318 08:48:21.561224 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 08:48:21.561522 master-0 kubenswrapper[6976]: I0318 08:48:21.561404 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 08:48:21.561554 master-0 kubenswrapper[6976]: I0318 08:48:21.561535 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.561929 master-0 kubenswrapper[6976]: I0318 08:48:21.561770 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 08:48:21.561929 master-0 kubenswrapper[6976]: I0318 08:48:21.561783 6976 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.562427 master-0 kubenswrapper[6976]: I0318 08:48:21.562058 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 08:48:21.562427 master-0 kubenswrapper[6976]: I0318 08:48:21.562182 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.562427 master-0 kubenswrapper[6976]: I0318 08:48:21.562250 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.562642 master-0 kubenswrapper[6976]: I0318 08:48:21.562606 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.563169 master-0 kubenswrapper[6976]: I0318 08:48:21.562746 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 08:48:21.563169 master-0 kubenswrapper[6976]: I0318 08:48:21.562780 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 08:48:21.563169 master-0 kubenswrapper[6976]: I0318 08:48:21.562808 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 08:48:21.563169 master-0 kubenswrapper[6976]: I0318 08:48:21.562814 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.563169 master-0 kubenswrapper[6976]: I0318 08:48:21.563143 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 
08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563293 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563328 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563353 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563369 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563408 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563308 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 08:48:21.563475 master-0 kubenswrapper[6976]: I0318 08:48:21.563473 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 08:48:21.563653 master-0 kubenswrapper[6976]: I0318 08:48:21.563500 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.563653 master-0 kubenswrapper[6976]: I0318 08:48:21.563474 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 08:48:21.563653 master-0 kubenswrapper[6976]: I0318 08:48:21.563622 6976 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 08:48:21.563723 master-0 kubenswrapper[6976]: I0318 08:48:21.563652 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:21.563753 master-0 kubenswrapper[6976]: I0318 08:48:21.563734 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 08:48:21.563781 master-0 kubenswrapper[6976]: I0318 08:48:21.563752 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 08:48:21.563878 master-0 kubenswrapper[6976]: I0318 08:48:21.563832 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 08:48:21.563972 master-0 kubenswrapper[6976]: I0318 08:48:21.563943 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.563972 master-0 kubenswrapper[6976]: I0318 08:48:21.563956 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 08:48:21.564047 master-0 kubenswrapper[6976]: I0318 08:48:21.564033 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 08:48:21.564078 master-0 kubenswrapper[6976]: I0318 08:48:21.564050 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 08:48:21.564104 master-0 kubenswrapper[6976]: I0318 08:48:21.564081 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 08:48:21.564129 master-0 kubenswrapper[6976]: I0318 08:48:21.563301 6976 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 08:48:21.564129 master-0 kubenswrapper[6976]: I0318 08:48:21.564120 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 08:48:21.564233 master-0 kubenswrapper[6976]: I0318 08:48:21.564193 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 08:48:21.564233 master-0 kubenswrapper[6976]: I0318 08:48:21.564210 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 08:48:21.564282 master-0 kubenswrapper[6976]: I0318 08:48:21.564214 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 08:48:21.564364 master-0 kubenswrapper[6976]: I0318 08:48:21.564328 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.564400 master-0 kubenswrapper[6976]: I0318 08:48:21.564371 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 08:48:21.564426 master-0 kubenswrapper[6976]: I0318 08:48:21.564334 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:21.564468 master-0 kubenswrapper[6976]: I0318 08:48:21.564454 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.564646 master-0 kubenswrapper[6976]: I0318 08:48:21.564618 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" 
Mar 18 08:48:21.564708 master-0 kubenswrapper[6976]: I0318 08:48:21.564445 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 08:48:21.565777 master-0 kubenswrapper[6976]: I0318 08:48:21.565752 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:48:21.566043 master-0 kubenswrapper[6976]: I0318 08:48:21.566009 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 08:48:21.566751 master-0 kubenswrapper[6976]: I0318 08:48:21.566724 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 08:48:21.567796 master-0 kubenswrapper[6976]: I0318 08:48:21.567751 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 08:48:21.567892 master-0 kubenswrapper[6976]: I0318 08:48:21.567869 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 08:48:21.568009 master-0 kubenswrapper[6976]: I0318 08:48:21.567984 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 08:48:21.568129 master-0 kubenswrapper[6976]: I0318 08:48:21.568109 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 08:48:21.568162 master-0 kubenswrapper[6976]: I0318 08:48:21.568128 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 08:48:21.568277 master-0 kubenswrapper[6976]: I0318 08:48:21.568261 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 
08:48:21.568277 master-0 kubenswrapper[6976]: I0318 08:48:21.568272 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 08:48:21.568403 master-0 kubenswrapper[6976]: I0318 08:48:21.568361 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 08:48:21.568445 master-0 kubenswrapper[6976]: I0318 08:48:21.568420 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.568672 master-0 kubenswrapper[6976]: I0318 08:48:21.568422 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 08:48:21.568753 master-0 kubenswrapper[6976]: I0318 08:48:21.568738 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 08:48:21.568833 master-0 kubenswrapper[6976]: I0318 08:48:21.568817 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 08:48:21.568864 master-0 kubenswrapper[6976]: I0318 08:48:21.568842 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 08:48:21.568925 master-0 kubenswrapper[6976]: I0318 08:48:21.568908 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 08:48:21.569001 master-0 kubenswrapper[6976]: I0318 08:48:21.568988 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 08:48:21.569039 master-0 kubenswrapper[6976]: I0318 08:48:21.569022 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 08:48:21.569226 master-0 
kubenswrapper[6976]: I0318 08:48:21.569207 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 08:48:21.569287 master-0 kubenswrapper[6976]: I0318 08:48:21.569214 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 08:48:21.569324 master-0 kubenswrapper[6976]: I0318 08:48:21.569289 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 08:48:21.569402 master-0 kubenswrapper[6976]: I0318 08:48:21.569384 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 08:48:21.569451 master-0 kubenswrapper[6976]: I0318 08:48:21.569438 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 08:48:21.569575 master-0 kubenswrapper[6976]: I0318 08:48:21.569547 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 08:48:21.572267 master-0 kubenswrapper[6976]: I0318 08:48:21.572239 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 08:48:21.572326 master-0 kubenswrapper[6976]: I0318 08:48:21.572269 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 08:48:21.572673 master-0 kubenswrapper[6976]: I0318 08:48:21.572630 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 08:48:21.576092 master-0 kubenswrapper[6976]: I0318 08:48:21.576074 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 08:48:21.576149 master-0 kubenswrapper[6976]: 
I0318 08:48:21.576079 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 08:48:21.576181 master-0 kubenswrapper[6976]: I0318 08:48:21.576161 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 08:48:21.576351 master-0 kubenswrapper[6976]: I0318 08:48:21.576336 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 08:48:21.576484 master-0 kubenswrapper[6976]: I0318 08:48:21.576384 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 08:48:21.576484 master-0 kubenswrapper[6976]: I0318 08:48:21.576393 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 08:48:21.576657 master-0 kubenswrapper[6976]: I0318 08:48:21.576628 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 08:48:21.576738 master-0 kubenswrapper[6976]: I0318 08:48:21.576723 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 08:48:21.576807 master-0 kubenswrapper[6976]: I0318 08:48:21.576611 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 08:48:21.577402 master-0 kubenswrapper[6976]: I0318 08:48:21.577383 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 08:48:21.577641 master-0 kubenswrapper[6976]: I0318 08:48:21.577623 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 08:48:21.577808 master-0 kubenswrapper[6976]: I0318 08:48:21.577792 6976 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 08:48:21.577960 master-0 kubenswrapper[6976]: I0318 08:48:21.577943 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 08:48:21.578414 master-0 kubenswrapper[6976]: I0318 08:48:21.578389 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 08:48:21.578662 master-0 kubenswrapper[6976]: I0318 08:48:21.578608 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 08:48:21.578713 master-0 kubenswrapper[6976]: I0318 08:48:21.578691 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 08:48:21.578744 master-0 kubenswrapper[6976]: I0318 08:48:21.578717 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 08:48:21.578814 master-0 kubenswrapper[6976]: I0318 08:48:21.578790 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 08:48:21.579233 master-0 kubenswrapper[6976]: I0318 08:48:21.579204 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 08:48:21.580144 master-0 kubenswrapper[6976]: I0318 08:48:21.580126 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 08:48:21.582005 master-0 kubenswrapper[6976]: I0318 08:48:21.581981 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 08:48:21.585940 master-0 kubenswrapper[6976]: I0318 08:48:21.585918 6976 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 08:48:21.590695 master-0 kubenswrapper[6976]: I0318 08:48:21.590662 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 08:48:21.596875 master-0 kubenswrapper[6976]: I0318 08:48:21.596841 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 08:48:21.616808 master-0 kubenswrapper[6976]: I0318 08:48:21.616782 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 08:48:21.636030 master-0 kubenswrapper[6976]: I0318 08:48:21.635997 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:21.636113 master-0 kubenswrapper[6976]: I0318 08:48:21.636057 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:48:21.636113 master-0 kubenswrapper[6976]: I0318 08:48:21.636101 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:21.636197 master-0 kubenswrapper[6976]: I0318 08:48:21.636139 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:21.636197 master-0 kubenswrapper[6976]: I0318 08:48:21.636173 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 08:48:21.636314 master-0 kubenswrapper[6976]: I0318 08:48:21.636270 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:21.636430 master-0 kubenswrapper[6976]: I0318 08:48:21.636407 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 08:48:21.636465 master-0 kubenswrapper[6976]: I0318 08:48:21.636445 6976 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:21.636498 master-0 kubenswrapper[6976]: I0318 08:48:21.636482 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:21.636525 master-0 kubenswrapper[6976]: I0318 08:48:21.636516 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:21.636585 master-0 kubenswrapper[6976]: I0318 08:48:21.636551 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:21.636615 master-0 kubenswrapper[6976]: I0318 08:48:21.636593 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 18 08:48:21.636615 master-0 kubenswrapper[6976]: I0318 08:48:21.636603 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:21.636674 master-0 kubenswrapper[6976]: I0318 08:48:21.636640 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.636674 master-0 kubenswrapper[6976]: I0318 08:48:21.636670 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:21.636727 master-0 kubenswrapper[6976]: I0318 08:48:21.636697 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:21.636754 master-0 kubenswrapper[6976]: I0318 08:48:21.636724 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:21.636808 master-0 kubenswrapper[6976]: I0318 08:48:21.636753 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:21.636837 master-0 kubenswrapper[6976]: I0318 08:48:21.636812 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:21.636837 master-0 kubenswrapper[6976]: I0318 08:48:21.636784 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:21.636887 master-0 kubenswrapper[6976]: I0318 08:48:21.636793 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:21.636957 master-0 kubenswrapper[6976]: I0318 08:48:21.636928 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:21.637029 master-0 kubenswrapper[6976]: I0318 08:48:21.637001 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:21.637150 master-0 kubenswrapper[6976]: I0318 08:48:21.637112 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.637233 master-0 kubenswrapper[6976]: I0318 08:48:21.637207 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:21.637337 master-0 kubenswrapper[6976]: I0318 08:48:21.637314 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp"
Mar 18 08:48:21.637420 master-0 kubenswrapper[6976]: I0318 08:48:21.637351 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl"
Mar 18 08:48:21.637420 master-0 kubenswrapper[6976]: I0318 08:48:21.637412 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx"
Mar 18 08:48:21.637511 master-0 kubenswrapper[6976]: I0318 08:48:21.637479 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:21.637545 master-0 kubenswrapper[6976]: I0318 08:48:21.637522 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:21.637605 master-0 kubenswrapper[6976]: I0318 08:48:21.637583 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:21.637690 master-0 kubenswrapper[6976]: I0318 08:48:21.637636 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.637723 master-0 kubenswrapper[6976]: I0318 08:48:21.637708 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.637804 master-0 kubenswrapper[6976]: I0318 08:48:21.637786 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx"
Mar 18 08:48:21.637903 master-0 kubenswrapper[6976]: I0318 08:48:21.637884 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:21.638018 master-0 kubenswrapper[6976]: I0318 08:48:21.637986 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.638118 master-0 kubenswrapper[6976]: I0318 08:48:21.638024 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx"
Mar 18 08:48:21.638118 master-0 kubenswrapper[6976]: I0318 08:48:21.638059 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:21.638118 master-0 kubenswrapper[6976]: I0318 08:48:21.638088 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:21.638118 master-0 kubenswrapper[6976]: I0318 08:48:21.638106 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638123 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638139 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638155 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638173 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638171 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:21.638218 master-0 kubenswrapper[6976]: I0318 08:48:21.638195 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:21.638360 master-0 kubenswrapper[6976]: I0318 08:48:21.638328 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k"
Mar 18 08:48:21.638447 master-0 kubenswrapper[6976]: I0318 08:48:21.638429 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p"
Mar 18 08:48:21.638492 master-0 kubenswrapper[6976]: I0318 08:48:21.638462 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:21.655184 master-0 kubenswrapper[6976]: I0318 08:48:21.655159 6976 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 18 08:48:21.656270 master-0 kubenswrapper[6976]: I0318 08:48:21.656242 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 18 08:48:21.711447 master-0 kubenswrapper[6976]: I0318 08:48:21.711383 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:21.732337 master-0 kubenswrapper[6976]: I0318 08:48:21.732242 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:21.739155 master-0 kubenswrapper[6976]: I0318 08:48:21.739117 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:21.739259 master-0 kubenswrapper[6976]: I0318 08:48:21.739158 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:21.739259 master-0 kubenswrapper[6976]: I0318 08:48:21.739197 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.739259 master-0 kubenswrapper[6976]: I0318 08:48:21.739214 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739262 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739280 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739295 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739312 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739346 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq"
Mar 18 08:48:21.739363 master-0 kubenswrapper[6976]: I0318 08:48:21.739361 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:21.744786 master-0 kubenswrapper[6976]: I0318 08:48:21.739380 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:48:21.744852 master-0 kubenswrapper[6976]: I0318 08:48:21.744798 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:48:21.744852 master-0 kubenswrapper[6976]: I0318 08:48:21.744829 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.744903 master-0 kubenswrapper[6976]: I0318 08:48:21.744849 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.745411 master-0 kubenswrapper[6976]: I0318 08:48:21.745358 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:21.745546 master-0 kubenswrapper[6976]: I0318 08:48:21.745508 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq"
Mar 18 08:48:21.745679 master-0 kubenswrapper[6976]: I0318 08:48:21.745651 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:21.745740 master-0 kubenswrapper[6976]: I0318 08:48:21.745704 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:48:21.745774 master-0 kubenswrapper[6976]: E0318 08:48:21.745746 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:21.745826 master-0 kubenswrapper[6976]: E0318 08:48:21.745813 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.245795861 +0000 UTC m=+1.831397456 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:21.745860 master-0 kubenswrapper[6976]: I0318 08:48:21.745835 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:21.745977 master-0 kubenswrapper[6976]: I0318 08:48:21.745875 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.746136 master-0 kubenswrapper[6976]: I0318 08:48:21.746097 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:21.750482 master-0 kubenswrapper[6976]: I0318 08:48:21.750423 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.750546 master-0 kubenswrapper[6976]: I0318 08:48:21.750517 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:21.750608 master-0 kubenswrapper[6976]: I0318 08:48:21.750550 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.750638 master-0 kubenswrapper[6976]: I0318 08:48:21.750626 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:21.750692 master-0 kubenswrapper[6976]: I0318 08:48:21.750661 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:21.750884 master-0 kubenswrapper[6976]: I0318 08:48:21.750831 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:21.750966 master-0 kubenswrapper[6976]: I0318 08:48:21.750943 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.751011 master-0 kubenswrapper[6976]: I0318 08:48:21.750989 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:21.751011 master-0 kubenswrapper[6976]: I0318 08:48:21.751003 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:21.751083 master-0 kubenswrapper[6976]: I0318 08:48:21.751060 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:21.751154 master-0 kubenswrapper[6976]: I0318 08:48:21.751133 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.751295 master-0 kubenswrapper[6976]: I0318 08:48:21.751274 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.751356 master-0 kubenswrapper[6976]: I0318 08:48:21.751310 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:21.751424 master-0 kubenswrapper[6976]: I0318 08:48:21.751397 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:21.751470 master-0 kubenswrapper[6976]: I0318 08:48:21.751446 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.751498 master-0 kubenswrapper[6976]: I0318 08:48:21.751466 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:21.751598 master-0 kubenswrapper[6976]: I0318 08:48:21.751553 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:21.751650 master-0 kubenswrapper[6976]: I0318 08:48:21.751629 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.751650 master-0 kubenswrapper[6976]: E0318 08:48:21.751643 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:21.751713 master-0 kubenswrapper[6976]: I0318 08:48:21.751670 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.751713 master-0 kubenswrapper[6976]: E0318 08:48:21.751698 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.251675664 +0000 UTC m=+1.837277259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found
Mar 18 08:48:21.751713 master-0 kubenswrapper[6976]: I0318 08:48:21.751703 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:21.751794 master-0 kubenswrapper[6976]: I0318 08:48:21.751715 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.751794 master-0 kubenswrapper[6976]: I0318 08:48:21.751742 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:21.751885 master-0 kubenswrapper[6976]: I0318 08:48:21.751868 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:21.752160 master-0 kubenswrapper[6976]: I0318 08:48:21.752135 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:21.753623 master-0 kubenswrapper[6976]: I0318 08:48:21.753501 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.758108 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.758204 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.758259 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.758345 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.758385 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.763744 master-0 
kubenswrapper[6976]: I0318 08:48:21.759872 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.760346 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761301 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761680 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761740 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod 
\"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761791 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761828 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761871 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761918 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.761967 6976 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762023 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762093 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762151 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762189 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod 
\"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762249 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762285 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762329 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762402 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762441 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762476 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762513 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.762595 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.763744 master-0 kubenswrapper[6976]: I0318 08:48:21.763193 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.764335 6976 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.764401 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.764495 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.764551 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.764675 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.264648145 +0000 UTC m=+1.850249770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.764970 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765012 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765049 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765086 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765117 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765150 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765151 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767231 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765554 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod 
\"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.765774 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.767355 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.267330696 +0000 UTC m=+1.852932351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.766099 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.766783 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767119 6976 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.765826 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.767462 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.267452119 +0000 UTC m=+1.853053804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.766385 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.765188 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") 
" pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767490 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767617 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767675 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767714 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767764 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767791 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767818 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767843 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767893 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: 
I0318 08:48:21.767920 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767941 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767961 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.767983 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768004 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 
08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768029 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768094 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768120 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768143 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768167 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768195 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768364 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768513 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768722 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: I0318 08:48:21.768937 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.769122 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:21.768639 master-0 kubenswrapper[6976]: E0318 08:48:21.769159 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.269148637 +0000 UTC m=+1.854750282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769388 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769426 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769464 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769490 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769514 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769537 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769559 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769662 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769959 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.769990 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770043 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770080 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770144 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770200 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770327 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770152 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770447 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770500 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770522 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770529 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770698 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770740 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770776 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770812 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.770981 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771258 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771669 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: 
\"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771726 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771782 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771832 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: I0318 08:48:21.771869 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.772032 master-0 kubenswrapper[6976]: E0318 08:48:21.772007 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:21.772032 
master-0 kubenswrapper[6976]: E0318 08:48:21.772073 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.272048732 +0000 UTC m=+1.857650357 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:21.780108 master-0 kubenswrapper[6976]: I0318 08:48:21.780070 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 08:48:21.793606 master-0 kubenswrapper[6976]: I0318 08:48:21.793521 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:21.808710 master-0 kubenswrapper[6976]: I0318 08:48:21.808658 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 08:48:21.829429 master-0 kubenswrapper[6976]: I0318 08:48:21.829384 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" Mar 18 08:48:21.847940 master-0 kubenswrapper[6976]: I0318 08:48:21.847882 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:21.856314 master-0 kubenswrapper[6976]: I0318 08:48:21.855955 6976 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 08:48:21.867714 master-0 kubenswrapper[6976]: I0318 08:48:21.867662 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:21.872445 master-0 kubenswrapper[6976]: I0318 08:48:21.872399 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872445 master-0 kubenswrapper[6976]: I0318 08:48:21.872439 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.872556 master-0 kubenswrapper[6976]: I0318 08:48:21.872457 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:48:21.872556 master-0 kubenswrapper[6976]: I0318 08:48:21.872489 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.872655 master-0 kubenswrapper[6976]: I0318 08:48:21.872557 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872655 master-0 kubenswrapper[6976]: I0318 08:48:21.872611 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " 
pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.872655 master-0 kubenswrapper[6976]: I0318 08:48:21.872635 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872655 master-0 kubenswrapper[6976]: I0318 08:48:21.872650 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872665 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872678 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872690 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872709 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872714 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872736 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872751 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.872782 master-0 kubenswrapper[6976]: I0318 08:48:21.872767 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872797 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872815 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872833 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872853 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872853 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: 
\"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872897 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872913 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872916 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872947 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872949 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872968 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872968 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872984 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.872996 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.873014 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " 
pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.873020 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.873031 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873021 master-0 kubenswrapper[6976]: I0318 08:48:21.873042 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873048 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873088 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: 
\"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873107 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: E0318 08:48:21.873109 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: E0318 08:48:21.873161 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.373144046 +0000 UTC m=+1.958745641 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873236 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873258 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873277 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873295 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 
08:48:21.873313 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873330 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873392 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873394 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873421 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873447 6976 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873457 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873483 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873553 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.873579 master-0 kubenswrapper[6976]: I0318 08:48:21.873557 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873634 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not 
found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873666 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873673 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.373660757 +0000 UTC m=+1.959262352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873688 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873715 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 
08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873742 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873741 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873770 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873784 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873821 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873844 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert 
podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.373835541 +0000 UTC m=+1.959437136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873847 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873864 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873868 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873886 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873916 6976 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.373906153 +0000 UTC m=+1.959507748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.873933 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: I0318 08:48:21.873963 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874035 master-0 kubenswrapper[6976]: E0318 08:48:21.874000 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.373993795 +0000 UTC m=+1.959595390 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874087 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874106 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874120 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874164 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874198 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874261 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: E0318 08:48:21.874274 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874302 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: E0318 08:48:21.874331 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:22.374319122 +0000 UTC m=+1.959920717 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874350 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874335 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874382 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874409 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874514 master-0 kubenswrapper[6976]: I0318 08:48:21.874498 6976 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874862 master-0 kubenswrapper[6976]: I0318 08:48:21.874537 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874862 master-0 kubenswrapper[6976]: I0318 08:48:21.874577 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:21.874862 master-0 kubenswrapper[6976]: I0318 08:48:21.874475 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.874862 master-0 kubenswrapper[6976]: I0318 08:48:21.874816 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.875723 master-0 kubenswrapper[6976]: I0318 08:48:21.874925 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 08:48:21.886281 master-0 kubenswrapper[6976]: I0318 08:48:21.886225 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:21.909402 master-0 kubenswrapper[6976]: I0318 08:48:21.909359 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 08:48:21.929583 master-0 kubenswrapper[6976]: I0318 08:48:21.929508 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 08:48:21.959965 master-0 kubenswrapper[6976]: I0318 08:48:21.959895 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 
08:48:21.970858 master-0 kubenswrapper[6976]: I0318 08:48:21.970816 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:21.983527 master-0 kubenswrapper[6976]: W0318 08:48:21.983480 6976 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 18 08:48:21.983715 master-0 kubenswrapper[6976]: E0318 08:48:21.983554 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:48:22.005411 master-0 kubenswrapper[6976]: E0318 08:48:22.005058 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 08:48:22.022023 master-0 kubenswrapper[6976]: E0318 08:48:22.021764 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" 
pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 08:48:22.045720 master-0 kubenswrapper[6976]: E0318 08:48:22.045508 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 08:48:22.064045 master-0 kubenswrapper[6976]: E0318 08:48:22.063802 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:22.110800 master-0 kubenswrapper[6976]: I0318 08:48:22.110763 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:22.127381 master-0 kubenswrapper[6976]: I0318 08:48:22.127349 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq"
Mar 18 08:48:22.158846 master-0 kubenswrapper[6976]: I0318 08:48:22.158800 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8"
Mar 18 08:48:22.167721 master-0 kubenswrapper[6976]: I0318 08:48:22.167677 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:48:22.187450 master-0 kubenswrapper[6976]: I0318 08:48:22.187420 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr"
Mar 18 08:48:22.206994 master-0 kubenswrapper[6976]: I0318 08:48:22.206972 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:22.228579 master-0 kubenswrapper[6976]: I0318 08:48:22.228534 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc"
Mar 18 08:48:22.248875 master-0 kubenswrapper[6976]: I0318 08:48:22.248845 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr"
Mar 18 08:48:22.270740 master-0 kubenswrapper[6976]: I0318 08:48:22.270698 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:22.278126 master-0 kubenswrapper[6976]: I0318 08:48:22.277805 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:22.278126 master-0 kubenswrapper[6976]: I0318 08:48:22.278033 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:22.278126 master-0 kubenswrapper[6976]: I0318 08:48:22.278078 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:22.278126 master-0 kubenswrapper[6976]: E0318 08:48:22.278075 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: E0318 08:48:22.278193 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: E0318 08:48:22.278219 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278181646 +0000 UTC m=+2.863783281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: E0318 08:48:22.278251 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: E0318 08:48:22.278261 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278243517 +0000 UTC m=+2.863845242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: E0318 08:48:22.278296 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278283088 +0000 UTC m=+2.863884763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: I0318 08:48:22.278112 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: I0318 08:48:22.278342 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:22.278357 master-0 kubenswrapper[6976]: I0318 08:48:22.278360 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:22.278650 master-0 kubenswrapper[6976]: I0318 08:48:22.278404 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:22.278650 master-0 kubenswrapper[6976]: E0318 08:48:22.278297 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:22.278650 master-0 kubenswrapper[6976]: E0318 08:48:22.278514 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278502773 +0000 UTC m=+2.864104368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found
Mar 18 08:48:22.278650 master-0 kubenswrapper[6976]: E0318 08:48:22.278557 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 08:48:22.278650 master-0 kubenswrapper[6976]: E0318 08:48:22.278611 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278601495 +0000 UTC m=+2.864203190 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found
Mar 18 08:48:22.278779 master-0 kubenswrapper[6976]: E0318 08:48:22.278655 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:48:22.278779 master-0 kubenswrapper[6976]: E0318 08:48:22.278682 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278674407 +0000 UTC m=+2.864276112 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:22.278779 master-0 kubenswrapper[6976]: E0318 08:48:22.278486 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 18 08:48:22.278779 master-0 kubenswrapper[6976]: E0318 08:48:22.278711 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.278703127 +0000 UTC m=+2.864304722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found
Mar 18 08:48:22.289279 master-0 kubenswrapper[6976]: I0318 08:48:22.289246 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:22.308579 master-0 kubenswrapper[6976]: I0318 08:48:22.308497 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9"
Mar 18 08:48:22.326686 master-0 kubenswrapper[6976]: I0318 08:48:22.326658 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:22.348152 master-0 kubenswrapper[6976]: I0318 08:48:22.348110 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr"
Mar 18 08:48:22.376295 master-0 kubenswrapper[6976]: I0318 08:48:22.376253 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq"
Mar 18 08:48:22.380180 master-0 kubenswrapper[6976]: I0318 08:48:22.380127 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q"
Mar 18 08:48:22.380243 master-0 kubenswrapper[6976]: I0318 08:48:22.380220 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:48:22.380297 master-0 kubenswrapper[6976]: I0318 08:48:22.380266 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:48:22.380329 master-0 kubenswrapper[6976]: E0318 08:48:22.380304 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 18 08:48:22.380329 master-0 kubenswrapper[6976]: I0318 08:48:22.380322 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:22.380380 master-0 kubenswrapper[6976]: E0318 08:48:22.380367 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380349634 +0000 UTC m=+2.965951329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380424 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: I0318 08:48:22.380454 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380500 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380482977 +0000 UTC m=+2.966084612 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380523 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: I0318 08:48:22.380527 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380561 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380540138 +0000 UTC m=+2.966141733 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380638 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380676 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380678 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380664801 +0000 UTC m=+2.966266436 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380710 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380701421 +0000 UTC m=+2.966303016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380736 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:22.380840 master-0 kubenswrapper[6976]: E0318 08:48:22.380776 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:23.380762053 +0000 UTC m=+2.966363678 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found
Mar 18 08:48:22.390313 master-0 kubenswrapper[6976]: I0318 08:48:22.390253 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl"
Mar 18 08:48:22.417265 master-0 kubenswrapper[6976]: I0318 08:48:22.417203 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc"
Mar 18 08:48:22.431500 master-0 kubenswrapper[6976]: I0318 08:48:22.431450 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:48:22.447680 master-0 kubenswrapper[6976]: I0318 08:48:22.447600 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9"
Mar 18 08:48:22.468682 master-0 kubenswrapper[6976]: I0318 08:48:22.468643 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 08:48:22.489229 master-0 kubenswrapper[6976]: I0318 08:48:22.489183 6976 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 18 08:48:22.493779 master-0 kubenswrapper[6976]: I0318 08:48:22.493680 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:22.538705 master-0 kubenswrapper[6976]: E0318 08:48:22.538476 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302"
Mar 18 08:48:22.538705 master-0 kubenswrapper[6976]: E0318 08:48:22.538710 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302,Command:[cluster-kube-scheduler-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-dddff6458-cpbdr_openshift-kube-scheduler-operator(0f9ba06c-7a6b-4f46-a747-80b0a0b58600): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 18 08:48:22.540317 master-0 kubenswrapper[6976]: E0318 08:48:22.540245 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" podUID="0f9ba06c-7a6b-4f46-a747-80b0a0b58600"
Mar 18 08:48:22.556257 master-0 kubenswrapper[6976]: I0318 08:48:22.556156 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:22.584647 master-0 kubenswrapper[6976]: I0318 08:48:22.584491 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 08:48:22.759123 master-0 kubenswrapper[6976]: I0318 08:48:22.758973 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-7r2q2"
Mar 18 08:48:23.077512 master-0 kubenswrapper[6976]: I0318 08:48:23.077456 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:23.084831 master-0 kubenswrapper[6976]: I0318 08:48:23.084785 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 18 08:48:23.238830 master-0 kubenswrapper[6976]: E0318 08:48:23.238755 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e"
Mar 18 08:48:23.239027 master-0 kubenswrapper[6976]: E0318 08:48:23.238946 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-apiserver-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,Command:[cluster-openshift-apiserver-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:KUBE_APISERVER_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkkcv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-apiserver-operator-d65958b8-m8p9p_openshift-apiserver-operator(81eefe1b-f683-4740-8fb0-0a5050f9b4a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 18 08:48:23.240234 master-0 kubenswrapper[6976]: E0318 08:48:23.240146 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" podUID="81eefe1b-f683-4740-8fb0-0a5050f9b4a4"
Mar 18 08:48:23.291818 master-0 kubenswrapper[6976]: I0318 08:48:23.291771 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.291840 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.291897 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.291928 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.291951 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.291975 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:23.292029 master-0 kubenswrapper[6976]: I0318 08:48:23.292028 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:23.292278 master-0 kubenswrapper[6976]: E0318 08:48:23.292136 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:23.292278 master-0 kubenswrapper[6976]: E0318 08:48:23.292184 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292168302 +0000 UTC m=+4.877769897 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:23.292549 master-0 kubenswrapper[6976]: E0318 08:48:23.292466 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 18 08:48:23.292549 master-0 kubenswrapper[6976]: E0318 08:48:23.292503 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292493889 +0000 UTC m=+4.878095484 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:23.292549 master-0 kubenswrapper[6976]: E0318 08:48:23.292549 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:23.292744 master-0 kubenswrapper[6976]: E0318 08:48:23.292595 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292583531 +0000 UTC m=+4.878185126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:23.292744 master-0 kubenswrapper[6976]: E0318 08:48:23.292640 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:23.292744 master-0 kubenswrapper[6976]: E0318 08:48:23.292670 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292658993 +0000 UTC m=+4.878260588 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:23.292744 master-0 kubenswrapper[6976]: E0318 08:48:23.292716 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:23.292744 master-0 kubenswrapper[6976]: E0318 08:48:23.292743 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292734785 +0000 UTC m=+4.878336380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:23.293012 master-0 kubenswrapper[6976]: E0318 08:48:23.292789 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:23.293012 master-0 kubenswrapper[6976]: E0318 08:48:23.292833 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292823867 +0000 UTC m=+4.878425462 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:23.293012 master-0 kubenswrapper[6976]: E0318 08:48:23.292878 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:23.293012 master-0 kubenswrapper[6976]: E0318 08:48:23.292904 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.292895948 +0000 UTC m=+4.878497543 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:23.392721 master-0 kubenswrapper[6976]: I0318 08:48:23.392611 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:23.392721 master-0 kubenswrapper[6976]: I0318 08:48:23.392660 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: 
\"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:23.392721 master-0 kubenswrapper[6976]: I0318 08:48:23.392689 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:23.392955 master-0 kubenswrapper[6976]: I0318 08:48:23.392746 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:23.392955 master-0 kubenswrapper[6976]: I0318 08:48:23.392767 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:23.392955 master-0 kubenswrapper[6976]: I0318 08:48:23.392796 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:23.392955 master-0 kubenswrapper[6976]: E0318 08:48:23.392912 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:23.393055 
master-0 kubenswrapper[6976]: E0318 08:48:23.392960 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.392941798 +0000 UTC m=+4.978543393 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:23.393328 master-0 kubenswrapper[6976]: E0318 08:48:23.393293 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:23.393368 master-0 kubenswrapper[6976]: E0318 08:48:23.393346 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.393337017 +0000 UTC m=+4.978938612 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:23.393425 master-0 kubenswrapper[6976]: E0318 08:48:23.393404 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:23.393462 master-0 kubenswrapper[6976]: E0318 08:48:23.393434 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.393426489 +0000 UTC m=+4.979028084 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:23.393506 master-0 kubenswrapper[6976]: E0318 08:48:23.393490 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:23.393543 master-0 kubenswrapper[6976]: E0318 08:48:23.393516 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.393508441 +0000 UTC m=+4.979110036 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:23.393592 master-0 kubenswrapper[6976]: E0318 08:48:23.393580 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:23.393620 master-0 kubenswrapper[6976]: E0318 08:48:23.393603 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.393596563 +0000 UTC m=+4.979198158 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:23.393673 master-0 kubenswrapper[6976]: E0318 08:48:23.393657 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:23.393706 master-0 kubenswrapper[6976]: E0318 08:48:23.393680 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:25.393673795 +0000 UTC m=+4.979275380 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:23.558974 master-0 kubenswrapper[6976]: I0318 08:48:23.558777 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:23.562012 master-0 kubenswrapper[6976]: I0318 08:48:23.561970 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 08:48:23.565057 master-0 kubenswrapper[6976]: I0318 08:48:23.564946 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:23.663705 master-0 kubenswrapper[6976]: I0318 08:48:23.663605 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:23.663705 master-0 kubenswrapper[6976]: I0318 08:48:23.663629 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:23.783270 master-0 kubenswrapper[6976]: I0318 08:48:23.783221 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:23.787449 master-0 kubenswrapper[6976]: I0318 08:48:23.787193 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:23.789977 master-0 kubenswrapper[6976]: E0318 08:48:23.789924 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263" Mar 18 
08:48:23.790125 master-0 kubenswrapper[6976]: E0318 08:48:23.790079 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmv75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod service-ca-operator-b865698dc-fhlfx_openshift-service-ca-operator(b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 08:48:23.791692 master-0 kubenswrapper[6976]: E0318 08:48:23.791652 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" podUID="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" Mar 18 08:48:24.393737 master-0 kubenswrapper[6976]: E0318 08:48:24.393666 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1739496388/1\": happened during read: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69" Mar 18 08:48:24.393947 master-0 kubenswrapper[6976]: E0318 08:48:24.393872 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) 
--authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t9rq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-95bf4f4d-whh6r_openshift-config-operator(95143c61-6f91-4cd4-9411-31c2fb75d4d0): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1739496388/1\": happened during read: context canceled" logger="UnhandledError" Mar 18 08:48:24.395812 master-0 kubenswrapper[6976]: E0318 08:48:24.395667 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1739496388/1\\\": happened during read: context canceled\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" Mar 18 08:48:24.405056 master-0 kubenswrapper[6976]: E0318 08:48:24.405017 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71" Mar 18 08:48:24.405228 master-0 
kubenswrapper[6976]: E0318 08:48:24.405186 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71,Command:[cluster-openshift-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:ROUTE_CONTROLLER_MANAGER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lnfwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-controller-manager-operator-8c94f4649-2g6x9_openshift-controller-manager-operator(0f6a7f55-84bd-4ea5-8248-4cb565904c3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 08:48:24.406415 master-0 kubenswrapper[6976]: E0318 08:48:24.406374 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" podUID="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" Mar 18 08:48:24.669596 master-0 kubenswrapper[6976]: I0318 08:48:24.669490 6976 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:25.314478 master-0 kubenswrapper[6976]: I0318 08:48:25.314370 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:25.314784 master-0 kubenswrapper[6976]: E0318 08:48:25.314549 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:25.314784 master-0 kubenswrapper[6976]: I0318 08:48:25.314622 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:25.314784 master-0 kubenswrapper[6976]: E0318 08:48:25.314699 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.314668161 +0000 UTC m=+8.900269796 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:25.314784 master-0 kubenswrapper[6976]: I0318 08:48:25.314741 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:25.314784 master-0 kubenswrapper[6976]: E0318 08:48:25.314758 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: I0318 08:48:25.314793 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: E0318 08:48:25.314848 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.314816594 +0000 UTC m=+8.900418239 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: E0318 08:48:25.314972 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: E0318 08:48:25.315065 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.315036419 +0000 UTC m=+8.900638054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: I0318 08:48:25.314973 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: I0318 08:48:25.315126 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: 
\"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: I0318 08:48:25.315175 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:25.315246 master-0 kubenswrapper[6976]: E0318 08:48:25.315082 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:25.315812 master-0 kubenswrapper[6976]: E0318 08:48:25.315280 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.315250014 +0000 UTC m=+8.900851659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:25.316698 master-0 kubenswrapper[6976]: E0318 08:48:25.315080 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:25.316773 master-0 kubenswrapper[6976]: E0318 08:48:25.316730 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:48:29.316709846 +0000 UTC m=+8.902311481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:25.316773 master-0 kubenswrapper[6976]: E0318 08:48:25.315341 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:25.316923 master-0 kubenswrapper[6976]: E0318 08:48:25.316780 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.316768668 +0000 UTC m=+8.902370293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:25.316923 master-0 kubenswrapper[6976]: E0318 08:48:25.315431 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:25.316923 master-0 kubenswrapper[6976]: E0318 08:48:25.316868 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.316845689 +0000 UTC m=+8.902447314 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:25.416270 master-0 kubenswrapper[6976]: I0318 08:48:25.416140 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:25.416630 master-0 kubenswrapper[6976]: E0318 08:48:25.416348 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:25.416630 master-0 kubenswrapper[6976]: I0318 08:48:25.416432 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:25.416630 master-0 kubenswrapper[6976]: E0318 08:48:25.416456 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.416424649 +0000 UTC m=+9.002026284 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:25.416630 master-0 kubenswrapper[6976]: I0318 08:48:25.416535 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:25.416630 master-0 kubenswrapper[6976]: E0318 08:48:25.416606 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: I0318 08:48:25.416684 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.416716 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.416692825 +0000 UTC m=+9.002294490 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.416729 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.416795 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.416774167 +0000 UTC m=+9.002375792 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: I0318 08:48:25.416891 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: I0318 08:48:25.416938 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.416947 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.416994 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:25.417017 master-0 kubenswrapper[6976]: E0318 08:48:25.417022 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.416999262 +0000 UTC m=+9.002600897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:25.417550 master-0 kubenswrapper[6976]: E0318 08:48:25.417044 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:25.417550 master-0 kubenswrapper[6976]: E0318 08:48:25.417067 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.417045873 +0000 UTC m=+9.002647518 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:25.417550 master-0 kubenswrapper[6976]: E0318 08:48:25.417100 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:29.417083884 +0000 UTC m=+9.002685509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:25.690319 master-0 kubenswrapper[6976]: E0318 08:48:25.690096 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3\": context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3" Mar 18 08:48:25.695330 master-0 kubenswrapper[6976]: E0318 08:48:25.690496 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3,Command:[],Args:[start 
-v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e46378af340ca82a8551fdfa20d0acf4ff4a5d43ceb0d4748eebc55be437d04,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rm2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-5f5d689c6b-lhcpp_openshift-cluster-storage-operator(c5c995cf-40a0-4cd6-87fa-96a522f7bc57): ErrImagePull: rpc error: code = Canceled desc = reading blob 
sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3: Get \"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3\": context canceled" logger="UnhandledError" Mar 18 08:48:25.698253 master-0 kubenswrapper[6976]: E0318 08:48:25.698143 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3: Get \\\"https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:c410a23bb5a5d652f8244e076bdaceea0e0377dddd221f3ece763abe08031cb3\\\": context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" podUID="c5c995cf-40a0-4cd6-87fa-96a522f7bc57" Mar 18 08:48:25.956604 master-0 kubenswrapper[6976]: E0318 08:48:25.955795 6976 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" Mar 18 08:48:25.956604 master-0 kubenswrapper[6976]: E0318 08:48:25.956379 6976 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 
10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9mh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-vr4gq_openshift-network-operator(600c92a1-56c5-497b-a8f0-746830f4180e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 18 08:48:25.958077 master-0 kubenswrapper[6976]: E0318 08:48:25.957717 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-vr4gq" podUID="600c92a1-56c5-497b-a8f0-746830f4180e" Mar 18 08:48:26.678451 master-0 kubenswrapper[6976]: I0318 08:48:26.678383 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerStarted","Data":"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"} Mar 18 08:48:26.680537 master-0 kubenswrapper[6976]: I0318 08:48:26.680473 6976 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" containerID="4dce06688697b9f6ea7f2ce75cbd0f8dd6b27c169d4036f6f09223ce6b7ed156" exitCode=0 Mar 18 08:48:26.680663 master-0 kubenswrapper[6976]: I0318 08:48:26.680617 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerDied","Data":"4dce06688697b9f6ea7f2ce75cbd0f8dd6b27c169d4036f6f09223ce6b7ed156"} Mar 18 08:48:27.223660 master-0 kubenswrapper[6976]: I0318 08:48:27.221169 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-7r2q2"] Mar 18 08:48:27.654390 master-0 kubenswrapper[6976]: W0318 08:48:27.654248 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf198f770_5483_4499_abb6_06026f2c6b37.slice/crio-b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2 WatchSource:0}: Error finding container b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2: Status 404 returned error can't find the container with id b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2 Mar 18 08:48:27.686208 master-0 kubenswrapper[6976]: I0318 08:48:27.685549 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7r2q2" event={"ID":"f198f770-5483-4499-abb6-06026f2c6b37","Type":"ContainerStarted","Data":"b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2"} Mar 18 08:48:27.735860 master-0 kubenswrapper[6976]: 
I0318 08:48:27.735294 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:27.735860 master-0 kubenswrapper[6976]: I0318 08:48:27.735526 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:27.735860 master-0 kubenswrapper[6976]: I0318 08:48:27.735544 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:27.810377 master-0 kubenswrapper[6976]: I0318 08:48:27.810332 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:28.690900 master-0 kubenswrapper[6976]: I0318 08:48:28.690848 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerStarted","Data":"8ff399eba975fe3e4ac2c3d81b3e52845b1835ad72d3a17e7e74d5e7eca9397d"} Mar 18 08:48:28.694780 master-0 kubenswrapper[6976]: I0318 08:48:28.694650 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:28.694958 master-0 kubenswrapper[6976]: I0318 08:48:28.694931 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-7r2q2" event={"ID":"f198f770-5483-4499-abb6-06026f2c6b37","Type":"ContainerStarted","Data":"10dc50f1d695165e5fe3bd77b781bfcbbbac3c9e634c31cd2906ed8aae89316d"} Mar 18 08:48:29.003851 master-0 kubenswrapper[6976]: I0318 08:48:29.003655 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:29.008332 master-0 kubenswrapper[6976]: I0318 08:48:29.008300 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:29.374148 master-0 
kubenswrapper[6976]: I0318 08:48:29.373421 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373504 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373553 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373626 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.373610452 +0000 UTC m=+16.959212047 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373638 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373691 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.373676674 +0000 UTC m=+16.959278269 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373744 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373802 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.373763696 +0000 UTC m=+16.959365291 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373553 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373865 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373903 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373929 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: 
\"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: I0318 08:48:29.373953 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373962 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.373987 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.373979441 +0000 UTC m=+16.959581036 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374007 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374023 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374028 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.374021491 +0000 UTC m=+16.959623086 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374043 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.374037052 +0000 UTC m=+16.959638647 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374067 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:29.374148 master-0 kubenswrapper[6976]: E0318 08:48:29.374091 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.374082533 +0000 UTC m=+16.959684128 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:29.474826 master-0 kubenswrapper[6976]: I0318 08:48:29.474674 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:29.474826 master-0 kubenswrapper[6976]: I0318 08:48:29.474812 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: 
\"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:29.475061 master-0 kubenswrapper[6976]: I0318 08:48:29.474843 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:29.475240 master-0 kubenswrapper[6976]: E0318 08:48:29.475100 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:29.475240 master-0 kubenswrapper[6976]: I0318 08:48:29.475178 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:29.475240 master-0 kubenswrapper[6976]: E0318 08:48:29.475219 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.475188527 +0000 UTC m=+17.060790162 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:29.475386 master-0 kubenswrapper[6976]: I0318 08:48:29.475264 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:29.475386 master-0 kubenswrapper[6976]: E0318 08:48:29.475314 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:29.475386 master-0 kubenswrapper[6976]: I0318 08:48:29.475349 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:29.475386 master-0 kubenswrapper[6976]: E0318 08:48:29.475376 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.475358211 +0000 UTC m=+17.060959896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:29.475633 master-0 kubenswrapper[6976]: E0318 08:48:29.475454 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:29.475633 master-0 kubenswrapper[6976]: E0318 08:48:29.475502 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.475487374 +0000 UTC m=+17.061088999 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:29.475874 master-0 kubenswrapper[6976]: E0318 08:48:29.475851 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:29.475950 master-0 kubenswrapper[6976]: E0318 08:48:29.475905 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.475894253 +0000 UTC m=+17.061495958 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:29.475950 master-0 kubenswrapper[6976]: E0318 08:48:29.475949 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:29.476037 master-0 kubenswrapper[6976]: E0318 08:48:29.475977 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.475966984 +0000 UTC m=+17.061568749 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:29.477054 master-0 kubenswrapper[6976]: E0318 08:48:29.476681 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:29.477054 master-0 kubenswrapper[6976]: E0318 08:48:29.476728 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:37.476714321 +0000 UTC m=+17.062316006 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572235 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg"] Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: E0318 08:48:29.572368 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572380 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: E0318 08:48:29.572391 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572397 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572447 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eeb8b56-2c99-4cac-8b32-dd51c94e53ba" containerName="prober" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572457 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.572813 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.574674 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.579399 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 08:48:29.584668 master-0 kubenswrapper[6976]: I0318 08:48:29.584357 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg"] Mar 18 08:48:29.677416 master-0 kubenswrapper[6976]: I0318 08:48:29.677321 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zx99\" (UniqueName: \"kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99\") pod \"migrator-8487694857-sbsqg\" (UID: \"c6176328-5931-405b-8519-8e4bc83bedfb\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 08:48:29.698708 master-0 kubenswrapper[6976]: I0318 08:48:29.698669 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerStarted","Data":"cdf9805777db651916bc0fbdb03aeca74e0291990d89a5792cd9c2058bcbad82"} Mar 18 08:48:29.700405 master-0 kubenswrapper[6976]: I0318 08:48:29.700381 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerStarted","Data":"9cdce5f3b67476e4d83692d6a7f121d082ca7bc4e1f5227b44f8955003a46e71"} Mar 18 08:48:29.700509 master-0 kubenswrapper[6976]: I0318 08:48:29.700481 6976 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:29.778262 master-0 kubenswrapper[6976]: I0318 08:48:29.778198 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zx99\" (UniqueName: \"kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99\") pod \"migrator-8487694857-sbsqg\" (UID: \"c6176328-5931-405b-8519-8e4bc83bedfb\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 08:48:29.797869 master-0 kubenswrapper[6976]: I0318 08:48:29.797660 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zx99\" (UniqueName: \"kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99\") pod \"migrator-8487694857-sbsqg\" (UID: \"c6176328-5931-405b-8519-8e4bc83bedfb\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 08:48:29.914859 master-0 kubenswrapper[6976]: I0318 08:48:29.914800 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 08:48:30.177423 master-0 kubenswrapper[6976]: I0318 08:48:30.176949 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg"] Mar 18 08:48:30.196966 master-0 kubenswrapper[6976]: W0318 08:48:30.196726 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6176328_5931_405b_8519_8e4bc83bedfb.slice/crio-3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a WatchSource:0}: Error finding container 3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a: Status 404 returned error can't find the container with id 3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a Mar 18 08:48:30.290065 master-0 kubenswrapper[6976]: I0318 08:48:30.289946 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:30.290768 master-0 kubenswrapper[6976]: I0318 08:48:30.290195 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:30.323624 master-0 kubenswrapper[6976]: I0318 08:48:30.323559 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 08:48:30.704100 master-0 kubenswrapper[6976]: I0318 08:48:30.704037 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" event={"ID":"c6176328-5931-405b-8519-8e4bc83bedfb","Type":"ContainerStarted","Data":"3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a"} Mar 18 08:48:30.705859 master-0 kubenswrapper[6976]: I0318 08:48:30.705824 6976 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" 
containerID="a380f5739da0f5e27b1b8f3bd34b12b88446dd93b791869bfaf36182d6421c5b" exitCode=0 Mar 18 08:48:30.705917 master-0 kubenswrapper[6976]: I0318 08:48:30.705892 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerDied","Data":"a380f5739da0f5e27b1b8f3bd34b12b88446dd93b791869bfaf36182d6421c5b"} Mar 18 08:48:31.294468 master-0 kubenswrapper[6976]: I0318 08:48:31.294356 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:31.294468 master-0 kubenswrapper[6976]: I0318 08:48:31.294475 6976 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 08:48:31.299010 master-0 kubenswrapper[6976]: I0318 08:48:31.298965 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:48:32.167507 master-0 kubenswrapper[6976]: I0318 08:48:32.167425 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:48:32.715947 master-0 kubenswrapper[6976]: I0318 08:48:32.715538 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" event={"ID":"c6176328-5931-405b-8519-8e4bc83bedfb","Type":"ContainerStarted","Data":"b287d6ec0f6410df210ef106e799ff2b43424c5b6aae9af4b1b8b69e08405d19"} Mar 18 08:48:32.715947 master-0 kubenswrapper[6976]: I0318 08:48:32.715950 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" event={"ID":"c6176328-5931-405b-8519-8e4bc83bedfb","Type":"ContainerStarted","Data":"9251f652e343cc31746e68b12576e3f1ee195326d86390558137250a9af4552b"} Mar 18 08:48:33.722625 master-0 kubenswrapper[6976]: I0318 
08:48:33.722538 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerStarted","Data":"94a4ad92cd3b53ae4641e35e7fd4ec8fccd8630c21c0fc3c12a574e02645e3da"} Mar 18 08:48:33.743237 master-0 kubenswrapper[6976]: I0318 08:48:33.743128 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" podStartSLOduration=3.323825898 podStartE2EDuration="4.743101198s" podCreationTimestamp="2026-03-18 08:48:29 +0000 UTC" firstStartedPulling="2026-03-18 08:48:30.203803745 +0000 UTC m=+9.789405340" lastFinishedPulling="2026-03-18 08:48:31.623079045 +0000 UTC m=+11.208680640" observedRunningTime="2026-03-18 08:48:32.737838348 +0000 UTC m=+12.323439963" watchObservedRunningTime="2026-03-18 08:48:33.743101198 +0000 UTC m=+13.328702813" Mar 18 08:48:35.730308 master-0 kubenswrapper[6976]: I0318 08:48:35.730221 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerStarted","Data":"b07a3a34e91709be9071f795c0e0650539cb11f6bc35fb3bec049b4bc3051c6c"} Mar 18 08:48:35.731480 master-0 kubenswrapper[6976]: I0318 08:48:35.731427 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerStarted","Data":"e101758dad1868c5a7ecd290b1cfffd6e710b7c13cfdccb7b41fe00e23534e6d"} Mar 18 08:48:36.736160 master-0 kubenswrapper[6976]: I0318 08:48:36.735841 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" 
event={"ID":"0f6a7f55-84bd-4ea5-8248-4cb565904c3b","Type":"ContainerStarted","Data":"66cbf701fabf0e0f193e14614de147bfd5b674f1f5978178edd97cd8b89c12a4"} Mar 18 08:48:37.382001 master-0 kubenswrapper[6976]: I0318 08:48:37.381927 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:48:37.382001 master-0 kubenswrapper[6976]: I0318 08:48:37.381979 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:37.382001 master-0 kubenswrapper[6976]: I0318 08:48:37.382017 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:37.382287 master-0 kubenswrapper[6976]: I0318 08:48:37.382126 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:37.382287 master-0 
kubenswrapper[6976]: E0318 08:48:37.382139 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 18 08:48:37.382287 master-0 kubenswrapper[6976]: E0318 08:48:37.382230 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382211117 +0000 UTC m=+32.967812712 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found Mar 18 08:48:37.382287 master-0 kubenswrapper[6976]: E0318 08:48:37.382234 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 18 08:48:37.382287 master-0 kubenswrapper[6976]: E0318 08:48:37.382277 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382264898 +0000 UTC m=+32.967866493 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382311 6976 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382329 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls podName:bf7a3329-a04c-4b58-9364-b907c00cbe08 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382323069 +0000 UTC m=+32.967924664 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls") pod "ingress-operator-66b84d69b-4cxfh" (UID: "bf7a3329-a04c-4b58-9364-b907c00cbe08") : secret "metrics-tls" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382357 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382373 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.3823684 +0000 UTC m=+32.967969995 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "node-tuning-operator-tls" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382399 6976 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: E0318 08:48:37.382414 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert podName:1deb139f-1903-417e-835c-28abdd156cdb nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382409321 +0000 UTC m=+32.968010916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-9s8lp" (UID: "1deb139f-1903-417e-835c-28abdd156cdb") : secret "performance-addon-operator-webhook-cert" not found Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: I0318 08:48:37.382156 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:48:37.382433 master-0 kubenswrapper[6976]: I0318 08:48:37.382435 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:48:37.382697 master-0 kubenswrapper[6976]: I0318 08:48:37.382455 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:37.382697 master-0 kubenswrapper[6976]: E0318 08:48:37.382590 6976 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 18 08:48:37.382697 master-0 kubenswrapper[6976]: E0318 08:48:37.382619 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls podName:6c56e1ac-8752-4e46-8692-93716087f0e0 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382612576 +0000 UTC m=+32.968214171 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-c4lgf" (UID: "6c56e1ac-8752-4e46-8692-93716087f0e0") : secret "image-registry-operator-tls" not found Mar 18 08:48:37.382697 master-0 kubenswrapper[6976]: E0318 08:48:37.382658 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 18 08:48:37.382801 master-0 kubenswrapper[6976]: E0318 08:48:37.382700 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.382689357 +0000 UTC m=+32.968291042 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found Mar 18 08:48:37.483135 master-0 kubenswrapper[6976]: I0318 08:48:37.483053 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:37.483314 master-0 kubenswrapper[6976]: I0318 08:48:37.483230 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: 
\"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:37.483314 master-0 kubenswrapper[6976]: E0318 08:48:37.483249 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:37.483514 master-0 kubenswrapper[6976]: I0318 08:48:37.483260 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:37.483514 master-0 kubenswrapper[6976]: E0318 08:48:37.483345 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:37.483514 master-0 kubenswrapper[6976]: E0318 08:48:37.483360 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:37.484063 master-0 kubenswrapper[6976]: E0318 08:48:37.483557 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.483433713 +0000 UTC m=+33.069035378 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:37.484129 master-0 kubenswrapper[6976]: E0318 08:48:37.484078 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.484063567 +0000 UTC m=+33.069665222 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:37.484213 master-0 kubenswrapper[6976]: E0318 08:48:37.484194 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.48418239 +0000 UTC m=+33.069783985 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:37.484302 master-0 kubenswrapper[6976]: I0318 08:48:37.484277 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:37.484390 master-0 kubenswrapper[6976]: E0318 08:48:37.484369 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:37.484446 master-0 kubenswrapper[6976]: E0318 08:48:37.484410 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.484400695 +0000 UTC m=+33.070002290 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:37.484496 master-0 kubenswrapper[6976]: I0318 08:48:37.484443 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:37.484528 master-0 kubenswrapper[6976]: I0318 08:48:37.484498 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:37.484647 master-0 kubenswrapper[6976]: E0318 08:48:37.484631 6976 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 18 08:48:37.484702 master-0 kubenswrapper[6976]: E0318 08:48:37.484641 6976 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:37.484702 master-0 kubenswrapper[6976]: E0318 08:48:37.484671 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls podName:4192ea44-a38c-4b70-93c3-8070da2ffe2f nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.484661991 +0000 UTC m=+33.070263596 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls") pod "dns-operator-9c5679d8f-2649q" (UID: "4192ea44-a38c-4b70-93c3-8070da2ffe2f") : secret "metrics-tls" not found Mar 18 08:48:37.484772 master-0 kubenswrapper[6976]: E0318 08:48:37.484702 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert podName:85d361a2-3f83-4857-b96e-3e98fcf33463 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.484679451 +0000 UTC m=+33.070281166 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert") pod "cluster-version-operator-56d8475767-t9zrr" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463") : secret "cluster-version-operator-serving-cert" not found Mar 18 08:48:38.350735 master-0 kubenswrapper[6976]: I0318 08:48:38.350698 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-d7dq8"] Mar 18 08:48:38.352012 master-0 kubenswrapper[6976]: I0318 08:48:38.351992 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.354674 master-0 kubenswrapper[6976]: I0318 08:48:38.354120 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 08:48:38.354674 master-0 kubenswrapper[6976]: I0318 08:48:38.354314 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 08:48:38.354674 master-0 kubenswrapper[6976]: I0318 08:48:38.354444 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:48:38.354674 master-0 kubenswrapper[6976]: I0318 08:48:38.354647 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:48:38.354997 master-0 kubenswrapper[6976]: I0318 08:48:38.354863 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:48:38.355103 master-0 kubenswrapper[6976]: I0318 08:48:38.355058 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:48:38.367669 master-0 kubenswrapper[6976]: I0318 08:48:38.367627 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-d7dq8"] Mar 18 08:48:38.496118 master-0 kubenswrapper[6976]: I0318 08:48:38.496018 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.496118 master-0 kubenswrapper[6976]: I0318 08:48:38.496083 6976 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcbg8\" (UniqueName: \"kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.497060 master-0 kubenswrapper[6976]: I0318 08:48:38.496156 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.497060 master-0 kubenswrapper[6976]: I0318 08:48:38.496241 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.497060 master-0 kubenswrapper[6976]: I0318 08:48:38.496294 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597442 master-0 kubenswrapper[6976]: I0318 08:48:38.597378 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod 
\"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597442 master-0 kubenswrapper[6976]: I0318 08:48:38.597431 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcbg8\" (UniqueName: \"kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597705 master-0 kubenswrapper[6976]: I0318 08:48:38.597460 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597705 master-0 kubenswrapper[6976]: I0318 08:48:38.597485 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597705 master-0 kubenswrapper[6976]: I0318 08:48:38.597516 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:38.597705 master-0 kubenswrapper[6976]: E0318 08:48:38.597657 6976 configmap.go:193] Couldn't get configMap 
openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:38.597705 master-0 kubenswrapper[6976]: E0318 08:48:38.597711 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:39.097697384 +0000 UTC m=+18.683298979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "client-ca" not found Mar 18 08:48:38.597951 master-0 kubenswrapper[6976]: E0318 08:48:38.597926 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 08:48:38.597993 master-0 kubenswrapper[6976]: E0318 08:48:38.597954 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:39.09794713 +0000 UTC m=+18.683548725 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "openshift-global-ca" not found Mar 18 08:48:38.598186 master-0 kubenswrapper[6976]: E0318 08:48:38.598161 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 08:48:38.598219 master-0 kubenswrapper[6976]: E0318 08:48:38.598190 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:39.098182995 +0000 UTC m=+18.683784590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "config" not found Mar 18 08:48:38.598258 master-0 kubenswrapper[6976]: E0318 08:48:38.598226 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:38.598258 master-0 kubenswrapper[6976]: E0318 08:48:38.598244 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:39.098239426 +0000 UTC m=+18.683841021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : secret "serving-cert" not found Mar 18 08:48:38.743936 master-0 kubenswrapper[6976]: I0318 08:48:38.743884 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerStarted","Data":"35bec5aad4d31f588044876420b3abf5aa56e6a349124b911e43ef3a01a96e33"} Mar 18 08:48:39.102986 master-0 kubenswrapper[6976]: I0318 08:48:39.102831 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.102986 master-0 kubenswrapper[6976]: I0318 08:48:39.102900 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.102986 master-0 kubenswrapper[6976]: E0318 08:48:39.102976 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103044 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: I0318 08:48:39.103052 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103068 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.10304936 +0000 UTC m=+19.688651065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "config" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103102 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.103087621 +0000 UTC m=+19.688689216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : secret "serving-cert" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103134 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103165 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.103155533 +0000 UTC m=+19.688757228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "client-ca" not found Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: I0318 08:48:39.103204 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.103259 master-0 kubenswrapper[6976]: E0318 08:48:39.103254 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 18 08:48:39.103477 master-0 kubenswrapper[6976]: E0318 08:48:39.103275 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles 
podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.103267995 +0000 UTC m=+19.688869700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "openshift-global-ca" not found Mar 18 08:48:39.511249 master-0 kubenswrapper[6976]: I0318 08:48:39.509755 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcbg8\" (UniqueName: \"kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.659449 master-0 kubenswrapper[6976]: I0318 08:48:39.659394 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-d7dq8"] Mar 18 08:48:39.659657 master-0 kubenswrapper[6976]: E0318 08:48:39.659629 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" podUID="f86f47a3-eccd-46da-b966-608cffdc4e6d" Mar 18 08:48:39.669469 master-0 kubenswrapper[6976]: I0318 08:48:39.669432 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"] Mar 18 08:48:39.670000 master-0 kubenswrapper[6976]: I0318 08:48:39.669978 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.671602 master-0 kubenswrapper[6976]: I0318 08:48:39.671554 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 08:48:39.671701 master-0 kubenswrapper[6976]: I0318 08:48:39.671604 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 08:48:39.671859 master-0 kubenswrapper[6976]: I0318 08:48:39.671809 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 08:48:39.672370 master-0 kubenswrapper[6976]: I0318 08:48:39.672344 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 08:48:39.672572 master-0 kubenswrapper[6976]: I0318 08:48:39.672540 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 08:48:39.682457 master-0 kubenswrapper[6976]: I0318 08:48:39.681288 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"] Mar 18 08:48:39.746202 master-0 kubenswrapper[6976]: I0318 08:48:39.745922 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.750837 master-0 kubenswrapper[6976]: I0318 08:48:39.750809 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:39.816541 master-0 kubenswrapper[6976]: I0318 08:48:39.816357 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvpt6\" (UniqueName: \"kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.816821 master-0 kubenswrapper[6976]: I0318 08:48:39.816597 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.816821 master-0 kubenswrapper[6976]: I0318 08:48:39.816687 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.816821 master-0 kubenswrapper[6976]: I0318 08:48:39.816803 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.918601 master-0 
kubenswrapper[6976]: I0318 08:48:39.917922 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcbg8\" (UniqueName: \"kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8\") pod \"f86f47a3-eccd-46da-b966-608cffdc4e6d\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " Mar 18 08:48:39.918601 master-0 kubenswrapper[6976]: I0318 08:48:39.918153 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvpt6\" (UniqueName: \"kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.918601 master-0 kubenswrapper[6976]: I0318 08:48:39.918445 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.918883 master-0 kubenswrapper[6976]: I0318 08:48:39.918780 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.918883 master-0 kubenswrapper[6976]: I0318 08:48:39.918856 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod 
\"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.919005 master-0 kubenswrapper[6976]: E0318 08:48:39.918970 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:39.919098 master-0 kubenswrapper[6976]: E0318 08:48:39.919046 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:39.919202 master-0 kubenswrapper[6976]: E0318 08:48:39.919080 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.419062633 +0000 UTC m=+20.004664218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found Mar 18 08:48:39.919202 master-0 kubenswrapper[6976]: E0318 08:48:39.919199 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:40.419169435 +0000 UTC m=+20.004771110 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found Mar 18 08:48:39.919539 master-0 kubenswrapper[6976]: I0318 08:48:39.919495 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:39.921075 master-0 kubenswrapper[6976]: I0318 08:48:39.920988 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8" (OuterVolumeSpecName: "kube-api-access-vcbg8") pod "f86f47a3-eccd-46da-b966-608cffdc4e6d" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d"). InnerVolumeSpecName "kube-api-access-vcbg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:48:39.946655 master-0 kubenswrapper[6976]: I0318 08:48:39.946606 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvpt6\" (UniqueName: \"kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:40.020753 master-0 kubenswrapper[6976]: I0318 08:48:40.020678 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcbg8\" (UniqueName: \"kubernetes.io/projected/f86f47a3-eccd-46da-b966-608cffdc4e6d-kube-api-access-vcbg8\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:40.121792 master-0 kubenswrapper[6976]: I0318 08:48:40.121616 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:40.121792 master-0 kubenswrapper[6976]: I0318 08:48:40.121745 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8" Mar 18 08:48:40.122088 master-0 kubenswrapper[6976]: I0318 08:48:40.121843 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: 
\"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8"
Mar 18 08:48:40.122088 master-0 kubenswrapper[6976]: I0318 08:48:40.121953 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8"
Mar 18 08:48:40.122273 master-0 kubenswrapper[6976]: E0318 08:48:40.122238 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:40.122363 master-0 kubenswrapper[6976]: E0318 08:48:40.122339 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:42.122306024 +0000 UTC m=+21.707907679 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : configmap "client-ca" not found
Mar 18 08:48:40.124015 master-0 kubenswrapper[6976]: E0318 08:48:40.122744 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:40.124015 master-0 kubenswrapper[6976]: E0318 08:48:40.122865 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert podName:f86f47a3-eccd-46da-b966-608cffdc4e6d nodeName:}" failed. No retries permitted until 2026-03-18 08:48:42.122836866 +0000 UTC m=+21.708438481 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert") pod "controller-manager-f5df8899c-d7dq8" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d") : secret "serving-cert" not found
Mar 18 08:48:40.124015 master-0 kubenswrapper[6976]: I0318 08:48:40.123950 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8"
Mar 18 08:48:40.125137 master-0 kubenswrapper[6976]: I0318 08:48:40.125083 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-d7dq8\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") " pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8"
Mar 18 08:48:40.223087 master-0 kubenswrapper[6976]: I0318 08:48:40.223036 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") pod \"f86f47a3-eccd-46da-b966-608cffdc4e6d\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") "
Mar 18 08:48:40.223423 master-0 kubenswrapper[6976]: I0318 08:48:40.223397 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") pod \"f86f47a3-eccd-46da-b966-608cffdc4e6d\" (UID: \"f86f47a3-eccd-46da-b966-608cffdc4e6d\") "
Mar 18 08:48:40.223718 master-0 kubenswrapper[6976]: I0318 08:48:40.223645 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f86f47a3-eccd-46da-b966-608cffdc4e6d" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:48:40.224185 master-0 kubenswrapper[6976]: I0318 08:48:40.224156 6976 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:40.224676 master-0 kubenswrapper[6976]: I0318 08:48:40.224613 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config" (OuterVolumeSpecName: "config") pod "f86f47a3-eccd-46da-b966-608cffdc4e6d" (UID: "f86f47a3-eccd-46da-b966-608cffdc4e6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:48:40.325862 master-0 kubenswrapper[6976]: I0318 08:48:40.325731 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:40.427019 master-0 kubenswrapper[6976]: I0318 08:48:40.426875 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:40.427268 master-0 kubenswrapper[6976]: I0318 08:48:40.427240 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:40.427359 master-0 kubenswrapper[6976]: E0318 08:48:40.427124 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:40.427491 master-0 kubenswrapper[6976]: E0318 08:48:40.427478 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:41.427447887 +0000 UTC m=+21.013049482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found
Mar 18 08:48:40.427556 master-0 kubenswrapper[6976]: E0318 08:48:40.427518 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:40.427684 master-0 kubenswrapper[6976]: E0318 08:48:40.427670 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:41.427657272 +0000 UTC m=+21.013258957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found
Mar 18 08:48:40.748708 master-0 kubenswrapper[6976]: I0318 08:48:40.748646 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-d7dq8"
Mar 18 08:48:40.791622 master-0 kubenswrapper[6976]: I0318 08:48:40.789267 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"]
Mar 18 08:48:40.791622 master-0 kubenswrapper[6976]: I0318 08:48:40.790333 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-d7dq8"]
Mar 18 08:48:40.791622 master-0 kubenswrapper[6976]: I0318 08:48:40.790457 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.792799 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.792896 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.792935 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.792798 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.794545 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 08:48:40.798037 master-0 kubenswrapper[6976]: I0318 08:48:40.796666 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-d7dq8"]
Mar 18 08:48:40.804801 master-0 kubenswrapper[6976]: I0318 08:48:40.803942 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"]
Mar 18 08:48:40.807610 master-0 kubenswrapper[6976]: I0318 08:48:40.806182 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 08:48:40.933123 master-0 kubenswrapper[6976]: I0318 08:48:40.933065 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gng87\" (UniqueName: \"kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.933123 master-0 kubenswrapper[6976]: I0318 08:48:40.933127 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.933338 master-0 kubenswrapper[6976]: I0318 08:48:40.933151 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.933338 master-0 kubenswrapper[6976]: I0318 08:48:40.933200 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.933338 master-0 kubenswrapper[6976]: I0318 08:48:40.933241 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:40.933338 master-0 kubenswrapper[6976]: I0318 08:48:40.933284 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f86f47a3-eccd-46da-b966-608cffdc4e6d-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:40.933338 master-0 kubenswrapper[6976]: I0318 08:48:40.933298 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f86f47a3-eccd-46da-b966-608cffdc4e6d-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:41.034721 master-0 kubenswrapper[6976]: I0318 08:48:41.034343 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.034721 master-0 kubenswrapper[6976]: I0318 08:48:41.034712 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.035034 master-0 kubenswrapper[6976]: I0318 08:48:41.035013 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.035073 master-0 kubenswrapper[6976]: I0318 08:48:41.035047 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gng87\" (UniqueName: \"kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.035102 master-0 kubenswrapper[6976]: I0318 08:48:41.035076 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.035179 master-0 kubenswrapper[6976]: E0318 08:48:41.035159 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:41.035234 master-0 kubenswrapper[6976]: E0318 08:48:41.035203 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:41.035290 master-0 kubenswrapper[6976]: E0318 08:48:41.035217 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:41.535200947 +0000 UTC m=+21.120802542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : configmap "client-ca" not found
Mar 18 08:48:41.035333 master-0 kubenswrapper[6976]: E0318 08:48:41.035307 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:41.535290169 +0000 UTC m=+21.120891764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : secret "serving-cert" not found
Mar 18 08:48:41.035504 master-0 kubenswrapper[6976]: I0318 08:48:41.035478 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.035700 master-0 kubenswrapper[6976]: I0318 08:48:41.035679 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.053090 master-0 kubenswrapper[6976]: I0318 08:48:41.053053 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gng87\" (UniqueName: \"kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: I0318 08:48:41.441218 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: E0318 08:48:41.441378 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: I0318 08:48:41.442000 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: E0318 08:48:41.442226 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: E0318 08:48:41.442287 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:43.442266612 +0000 UTC m=+23.027868207 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found
Mar 18 08:48:41.443708 master-0 kubenswrapper[6976]: E0318 08:48:41.442309 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:43.442301113 +0000 UTC m=+23.027902708 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found
Mar 18 08:48:41.479474 master-0 kubenswrapper[6976]: I0318 08:48:41.479414 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"]
Mar 18 08:48:41.479931 master-0 kubenswrapper[6976]: E0318 08:48:41.479828 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn" podUID="d1d9ae5d-057f-45c0-8aec-14c42bdec2c8"
Mar 18 08:48:41.542743 master-0 kubenswrapper[6976]: I0318 08:48:41.542692 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.542743 master-0 kubenswrapper[6976]: I0318 08:48:41.542737 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.542954 master-0 kubenswrapper[6976]: E0318 08:48:41.542906 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:41.543058 master-0 kubenswrapper[6976]: E0318 08:48:41.543041 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:42.542954166 +0000 UTC m=+22.128555831 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : secret "serving-cert" not found
Mar 18 08:48:41.543116 master-0 kubenswrapper[6976]: E0318 08:48:41.543045 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:41.543228 master-0 kubenswrapper[6976]: E0318 08:48:41.543209 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:42.543186981 +0000 UTC m=+22.128788666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : configmap "client-ca" not found
Mar 18 08:48:41.751479 master-0 kubenswrapper[6976]: I0318 08:48:41.751427 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.760185 master-0 kubenswrapper[6976]: I0318 08:48:41.760104 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:41.847638 master-0 kubenswrapper[6976]: I0318 08:48:41.847216 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gng87\" (UniqueName: \"kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87\") pod \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") "
Mar 18 08:48:41.847638 master-0 kubenswrapper[6976]: I0318 08:48:41.847266 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles\") pod \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") "
Mar 18 08:48:41.847638 master-0 kubenswrapper[6976]: I0318 08:48:41.847307 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config\") pod \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") "
Mar 18 08:48:41.848253 master-0 kubenswrapper[6976]: I0318 08:48:41.848186 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:48:41.849392 master-0 kubenswrapper[6976]: I0318 08:48:41.849166 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config" (OuterVolumeSpecName: "config") pod "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:48:41.850978 master-0 kubenswrapper[6976]: I0318 08:48:41.850929 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87" (OuterVolumeSpecName: "kube-api-access-gng87") pod "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8"). InnerVolumeSpecName "kube-api-access-gng87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 08:48:41.949064 master-0 kubenswrapper[6976]: I0318 08:48:41.949002 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gng87\" (UniqueName: \"kubernetes.io/projected/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-kube-api-access-gng87\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:41.949064 master-0 kubenswrapper[6976]: I0318 08:48:41.949045 6976 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:41.949064 master-0 kubenswrapper[6976]: I0318 08:48:41.949055 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-config\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:42.402109 master-0 kubenswrapper[6976]: I0318 08:48:42.402058 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-fhj95"]
Mar 18 08:48:42.402584 master-0 kubenswrapper[6976]: I0318 08:48:42.402543 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.402698 master-0 kubenswrapper[6976]: I0318 08:48:42.402655 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-fhj95"]
Mar 18 08:48:42.406629 master-0 kubenswrapper[6976]: I0318 08:48:42.406593 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 18 08:48:42.406830 master-0 kubenswrapper[6976]: I0318 08:48:42.406632 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 08:48:42.406878 master-0 kubenswrapper[6976]: I0318 08:48:42.406841 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 08:48:42.408608 master-0 kubenswrapper[6976]: I0318 08:48:42.408380 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: I0318 08:48:42.558120 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfms\" (UniqueName: \"kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: I0318 08:48:42.558177 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: I0318 08:48:42.558238 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: I0318 08:48:42.558263 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: I0318 08:48:42.558281 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca\") pod \"controller-manager-85b8696b7d-lbmqn\" (UID: \"d1d9ae5d-057f-45c0-8aec-14c42bdec2c8\") " pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: E0318 08:48:42.558418 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: E0318 08:48:42.558471 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:44.558453756 +0000 UTC m=+24.144055351 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : configmap "client-ca" not found
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: E0318 08:48:42.558546 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:42.559627 master-0 kubenswrapper[6976]: E0318 08:48:42.558583 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert podName:d1d9ae5d-057f-45c0-8aec-14c42bdec2c8 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:44.558559828 +0000 UTC m=+24.144161423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert") pod "controller-manager-85b8696b7d-lbmqn" (UID: "d1d9ae5d-057f-45c0-8aec-14c42bdec2c8") : secret "serving-cert" not found
Mar 18 08:48:42.602776 master-0 kubenswrapper[6976]: I0318 08:48:42.602728 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f86f47a3-eccd-46da-b966-608cffdc4e6d" path="/var/lib/kubelet/pods/f86f47a3-eccd-46da-b966-608cffdc4e6d/volumes"
Mar 18 08:48:42.659128 master-0 kubenswrapper[6976]: I0318 08:48:42.659023 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.659611 master-0 kubenswrapper[6976]: I0318 08:48:42.659521 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfms\" (UniqueName: \"kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.659909 master-0 kubenswrapper[6976]: I0318 08:48:42.659843 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.662085 master-0 kubenswrapper[6976]: I0318 08:48:42.659893 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.663674 master-0 kubenswrapper[6976]: I0318 08:48:42.663645 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.678181 master-0 kubenswrapper[6976]: I0318 08:48:42.678156 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfms\" (UniqueName: \"kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.721029 master-0 kubenswrapper[6976]: I0318 08:48:42.720981 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95"
Mar 18 08:48:42.755316 master-0 kubenswrapper[6976]: I0318 08:48:42.755263 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"
Mar 18 08:48:42.755940 master-0 kubenswrapper[6976]: I0318 08:48:42.755802 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-vr4gq" event={"ID":"600c92a1-56c5-497b-a8f0-746830f4180e","Type":"ContainerStarted","Data":"4c44f15be55d35dd86dd7f34654138b86f0646d97d8f7713f25983a4af46381c"}
Mar 18 08:48:42.795305 master-0 kubenswrapper[6976]: I0318 08:48:42.794197 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"]
Mar 18 08:48:42.795305 master-0 kubenswrapper[6976]: I0318 08:48:42.794932 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.796754 master-0 kubenswrapper[6976]: I0318 08:48:42.795760 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"]
Mar 18 08:48:42.798080 master-0 kubenswrapper[6976]: I0318 08:48:42.798015 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 08:48:42.798637 master-0 kubenswrapper[6976]: I0318 08:48:42.798589 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 08:48:42.798729 master-0 kubenswrapper[6976]: I0318 08:48:42.798644 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 08:48:42.798889 master-0 kubenswrapper[6976]: I0318 08:48:42.798841 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 08:48:42.807662 master-0 kubenswrapper[6976]: I0318 08:48:42.805734 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-85b8696b7d-lbmqn"]
Mar 18 08:48:42.807662 master-0 kubenswrapper[6976]: I0318 08:48:42.806250 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"]
Mar 18 08:48:42.807877 master-0 kubenswrapper[6976]: I0318 08:48:42.807769 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 08:48:42.817680 master-0 kubenswrapper[6976]: I0318 08:48:42.811035 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 08:48:42.965997 master-0 kubenswrapper[6976]: I0318 08:48:42.965935 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.966208 master-0 kubenswrapper[6976]: I0318 08:48:42.966065 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.966208 master-0 kubenswrapper[6976]: I0318 08:48:42.966132 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5mtg\" (UniqueName: \"kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.966208 master-0 kubenswrapper[6976]: I0318 08:48:42.966170 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.966208 master-0 kubenswrapper[6976]: I0318 08:48:42.966204 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:42.966372 master-0 kubenswrapper[6976]: I0318 08:48:42.966345 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:42.966426 master-0 kubenswrapper[6976]: I0318 08:48:42.966380 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:43.067022 master-0 kubenswrapper[6976]: I0318 08:48:43.066951 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod
\"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.067022 master-0 kubenswrapper[6976]: I0318 08:48:43.067019 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.067275 master-0 kubenswrapper[6976]: E0318 08:48:43.067122 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:43.067275 master-0 kubenswrapper[6976]: E0318 08:48:43.067203 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:43.567182158 +0000 UTC m=+23.152783753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : configmap "client-ca" not found Mar 18 08:48:43.068901 master-0 kubenswrapper[6976]: I0318 08:48:43.067594 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.068901 master-0 kubenswrapper[6976]: I0318 08:48:43.067650 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.068901 master-0 kubenswrapper[6976]: I0318 08:48:43.067693 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5mtg\" (UniqueName: \"kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.068901 master-0 kubenswrapper[6976]: I0318 08:48:43.068900 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 
08:48:43.069123 master-0 kubenswrapper[6976]: E0318 08:48:43.068911 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:43.069123 master-0 kubenswrapper[6976]: E0318 08:48:43.069005 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:43.568979459 +0000 UTC m=+23.154581064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : secret "serving-cert" not found Mar 18 08:48:43.069629 master-0 kubenswrapper[6976]: I0318 08:48:43.069606 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.091370 master-0 kubenswrapper[6976]: I0318 08:48:43.091314 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5mtg\" (UniqueName: \"kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.269913 master-0 kubenswrapper[6976]: I0318 08:48:43.269508 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-fhj95"] Mar 18 08:48:43.283050 master-0 kubenswrapper[6976]: W0318 08:48:43.282983 6976 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod680006ef_a955_491e_b6a3_1ca7fcc20165.slice/crio-2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449 WatchSource:0}: Error finding container 2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449: Status 404 returned error can't find the container with id 2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449 Mar 18 08:48:43.471934 master-0 kubenswrapper[6976]: I0318 08:48:43.471744 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:43.471934 master-0 kubenswrapper[6976]: I0318 08:48:43.471867 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:43.472183 master-0 kubenswrapper[6976]: E0318 08:48:43.471988 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:43.472183 master-0 kubenswrapper[6976]: E0318 08:48:43.472061 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.472040025 +0000 UTC m=+27.057641620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found Mar 18 08:48:43.472423 master-0 kubenswrapper[6976]: E0318 08:48:43.472388 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:43.472423 master-0 kubenswrapper[6976]: E0318 08:48:43.472422 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.472414535 +0000 UTC m=+27.058016120 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found Mar 18 08:48:43.580416 master-0 kubenswrapper[6976]: I0318 08:48:43.580371 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.580708 master-0 kubenswrapper[6976]: I0318 08:48:43.580464 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:43.582315 
master-0 kubenswrapper[6976]: E0318 08:48:43.582287 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:43.582400 master-0 kubenswrapper[6976]: E0318 08:48:43.582361 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:44.582342837 +0000 UTC m=+24.167944432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : configmap "client-ca" not found Mar 18 08:48:43.582494 master-0 kubenswrapper[6976]: E0318 08:48:43.582293 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:43.582581 master-0 kubenswrapper[6976]: E0318 08:48:43.582515 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:44.582505202 +0000 UTC m=+24.168106797 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : secret "serving-cert" not found Mar 18 08:48:43.759877 master-0 kubenswrapper[6976]: I0318 08:48:43.759779 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"65e3988f2be17b2abc550a4cf35f76189f8aca364b91625f45824c3c0a649d5f"} Mar 18 08:48:43.760464 master-0 kubenswrapper[6976]: I0318 08:48:43.760441 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:43.761299 master-0 kubenswrapper[6976]: I0318 08:48:43.761260 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" event={"ID":"680006ef-a955-491e-b6a3-1ca7fcc20165","Type":"ContainerStarted","Data":"f668ca32df6831c1852bfec6ac04b2b91b947fda7bf3560ef4ffe10748867750"} Mar 18 08:48:43.761299 master-0 kubenswrapper[6976]: I0318 08:48:43.761287 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" event={"ID":"680006ef-a955-491e-b6a3-1ca7fcc20165","Type":"ContainerStarted","Data":"2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449"} Mar 18 08:48:43.763162 master-0 kubenswrapper[6976]: I0318 08:48:43.763098 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" event={"ID":"c5c995cf-40a0-4cd6-87fa-96a522f7bc57","Type":"ContainerStarted","Data":"f746e038f97898d00b98367b1de674491c64f30a9f70b4c41c7083bf263f99b2"} Mar 18 08:48:43.798223 master-0 kubenswrapper[6976]: I0318 08:48:43.798111 6976 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" podStartSLOduration=1.798087167 podStartE2EDuration="1.798087167s" podCreationTimestamp="2026-03-18 08:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:43.797660225 +0000 UTC m=+23.383261850" watchObservedRunningTime="2026-03-18 08:48:43.798087167 +0000 UTC m=+23.383688792" Mar 18 08:48:44.178555 master-0 kubenswrapper[6976]: I0318 08:48:44.178171 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62"] Mar 18 08:48:44.178950 master-0 kubenswrapper[6976]: I0318 08:48:44.178906 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 08:48:44.197802 master-0 kubenswrapper[6976]: I0318 08:48:44.197748 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62"] Mar 18 08:48:44.290079 master-0 kubenswrapper[6976]: I0318 08:48:44.290033 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwp9m\" (UniqueName: \"kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m\") pod \"csi-snapshot-controller-64854d9cff-qnc62\" (UID: \"4e919445-81d0-4663-8941-f596d8121305\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 08:48:44.306392 master-0 kubenswrapper[6976]: I0318 08:48:44.306351 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"] Mar 18 08:48:44.307021 master-0 kubenswrapper[6976]: I0318 08:48:44.307003 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.309288 master-0 kubenswrapper[6976]: W0318 08:48:44.309257 6976 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 18 08:48:44.309372 master-0 kubenswrapper[6976]: E0318 08:48:44.309305 6976 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 08:48:44.309372 master-0 kubenswrapper[6976]: W0318 08:48:44.309350 6976 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 18 08:48:44.309433 master-0 kubenswrapper[6976]: E0318 08:48:44.309378 6976 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 08:48:44.309629 master-0 kubenswrapper[6976]: W0318 08:48:44.309613 6976 
reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "catalogd-trusted-ca-bundle" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 18 08:48:44.309691 master-0 kubenswrapper[6976]: E0318 08:48:44.309631 6976 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"catalogd-trusted-ca-bundle\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 08:48:44.309934 master-0 kubenswrapper[6976]: W0318 08:48:44.309917 6976 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: secrets "catalogserver-cert" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "openshift-catalogd": no relationship found between node 'master-0' and this object Mar 18 08:48:44.309975 master-0 kubenswrapper[6976]: E0318 08:48:44.309937 6976 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"catalogserver-cert\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-catalogd\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Mar 18 08:48:44.318794 master-0 kubenswrapper[6976]: I0318 08:48:44.318767 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"] Mar 18 08:48:44.391413 master-0 kubenswrapper[6976]: I0318 
08:48:44.391360 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.391413 master-0 kubenswrapper[6976]: I0318 08:48:44.391418 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.391683 master-0 kubenswrapper[6976]: I0318 08:48:44.391445 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.391683 master-0 kubenswrapper[6976]: I0318 08:48:44.391473 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwp9m\" (UniqueName: \"kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m\") pod \"csi-snapshot-controller-64854d9cff-qnc62\" (UID: \"4e919445-81d0-4663-8941-f596d8121305\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 08:48:44.391683 master-0 kubenswrapper[6976]: I0318 08:48:44.391487 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.391683 master-0 kubenswrapper[6976]: I0318 08:48:44.391526 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4l97\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.391683 master-0 kubenswrapper[6976]: I0318 08:48:44.391589 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.393459 master-0 kubenswrapper[6976]: I0318 08:48:44.393420 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"] Mar 18 08:48:44.393990 master-0 kubenswrapper[6976]: I0318 08:48:44.393969 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:48:44.395855 master-0 kubenswrapper[6976]: I0318 08:48:44.395817 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 08:48:44.396036 master-0 kubenswrapper[6976]: I0318 08:48:44.396014 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 08:48:44.403880 master-0 kubenswrapper[6976]: I0318 08:48:44.403855 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"] Mar 18 08:48:44.404962 master-0 kubenswrapper[6976]: I0318 08:48:44.404936 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 08:48:44.415660 master-0 kubenswrapper[6976]: I0318 08:48:44.415596 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwp9m\" (UniqueName: \"kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m\") pod \"csi-snapshot-controller-64854d9cff-qnc62\" (UID: \"4e919445-81d0-4663-8941-f596d8121305\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 08:48:44.491245 master-0 kubenswrapper[6976]: I0318 08:48:44.491202 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 08:48:44.492125 master-0 kubenswrapper[6976]: I0318 08:48:44.492089 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:48:44.492276 master-0 kubenswrapper[6976]: I0318 08:48:44.492247 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:48:44.492369 master-0 kubenswrapper[6976]: I0318 08:48:44.492343 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.492432 master-0 kubenswrapper[6976]: I0318 08:48:44.492409 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:44.492486 master-0 kubenswrapper[6976]: 
I0318 08:48:44.492462 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.492547 master-0 kubenswrapper[6976]: I0318 08:48:44.492524 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.492785 master-0 kubenswrapper[6976]: I0318 08:48:44.492766 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.493609 master-0 kubenswrapper[6976]: I0318 08:48:44.492846 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.493689 master-0 kubenswrapper[6976]: I0318 08:48:44.492922 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4l97\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.493843 master-0 kubenswrapper[6976]: I0318 08:48:44.493828 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.493916 master-0 kubenswrapper[6976]: I0318 08:48:44.493904 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.494031 master-0 kubenswrapper[6976]: I0318 08:48:44.494005 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:44.494072 master-0 kubenswrapper[6976]: I0318 08:48:44.494013 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw5zj\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.494170 master-0 kubenswrapper[6976]: I0318 08:48:44.494156 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.595406 master-0 kubenswrapper[6976]: I0318 08:48:44.595319 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.595406 master-0 kubenswrapper[6976]: I0318 08:48:44.595361 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:44.595498 master-0 kubenswrapper[6976]: E0318 08:48:44.595454 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: E0318 08:48:44.595578 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.595541436 +0000 UTC m=+26.181143031 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : secret "serving-cert" not found
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595588 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595663 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595753 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595771 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5zj\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: E0318 08:48:44.595793 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595796 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: E0318 08:48:44.595824 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:46.595810514 +0000 UTC m=+26.181412109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : configmap "client-ca" not found
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595838 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.595838 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.598577 master-0 kubenswrapper[6976]: I0318 08:48:44.596155 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.601231 master-0 kubenswrapper[6976]: I0318 08:48:44.601196 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.604753 master-0 kubenswrapper[6976]: I0318 08:48:44.604712 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d9ae5d-057f-45c0-8aec-14c42bdec2c8" path="/var/lib/kubelet/pods/d1d9ae5d-057f-45c0-8aec-14c42bdec2c8/volumes"
Mar 18 08:48:44.613018 master-0 kubenswrapper[6976]: I0318 08:48:44.612917 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5zj\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.662094 master-0 kubenswrapper[6976]: I0318 08:48:44.662039 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62"]
Mar 18 08:48:44.705493 master-0 kubenswrapper[6976]: I0318 08:48:44.705429 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:44.774533 master-0 kubenswrapper[6976]: I0318 08:48:44.774056 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"d08575c558c437f11dbc3ff61697000e9d98f0ee2f13a6f88c21e791f90d00ab"}
Mar 18 08:48:44.917431 master-0 kubenswrapper[6976]: I0318 08:48:44.917382 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"]
Mar 18 08:48:45.167327 master-0 kubenswrapper[6976]: I0318 08:48:45.167072 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 18 08:48:45.222560 master-0 kubenswrapper[6976]: I0318 08:48:45.222512 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 18 08:48:45.227006 master-0 kubenswrapper[6976]: I0318 08:48:45.226198 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:45.259833 master-0 kubenswrapper[6976]: I0318 08:48:45.257560 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 08:48:45.898522 master-0 kubenswrapper[6976]: I0318 08:48:45.898466 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerStarted","Data":"c8059ff1993dfafd31ba30c72f4cc888d34b16a522b9c69b31284816d9f0ba3f"}
Mar 18 08:48:45.898522 master-0 kubenswrapper[6976]: I0318 08:48:45.898522 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerStarted","Data":"bc52f72875ab784115d2ae7cf81aabfc20eff1b537ca6458d743902aaf4541e4"}
Mar 18 08:48:45.899470 master-0 kubenswrapper[6976]: I0318 08:48:45.898536 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerStarted","Data":"3b274035f2ac7d46626545fefa2691ceffb107580cf6cf569c0be6a2b76a628f"}
Mar 18 08:48:45.906593 master-0 kubenswrapper[6976]: I0318 08:48:45.899649 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:48:45.906593 master-0 kubenswrapper[6976]: I0318 08:48:45.906338 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 18 08:48:45.913533 master-0 kubenswrapper[6976]: I0318 08:48:45.913481 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4l97\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:45.920674 master-0 kubenswrapper[6976]: I0318 08:48:45.917037 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:45.988590 master-0 kubenswrapper[6976]: I0318 08:48:45.988129 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podStartSLOduration=1.988111207 podStartE2EDuration="1.988111207s" podCreationTimestamp="2026-03-18 08:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:45.987892171 +0000 UTC m=+25.573493766" watchObservedRunningTime="2026-03-18 08:48:45.988111207 +0000 UTC m=+25.573712792"
Mar 18 08:48:46.122164 master-0 kubenswrapper[6976]: I0318 08:48:46.121893 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:48:46.342740 master-0 kubenswrapper[6976]: I0318 08:48:46.339691 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"]
Mar 18 08:48:46.356866 master-0 kubenswrapper[6976]: W0318 08:48:46.356820 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod411d544f_e105_44f0_927a_f61406b3f070.slice/crio-de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c WatchSource:0}: Error finding container de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c: Status 404 returned error can't find the container with id de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c
Mar 18 08:48:46.625683 master-0 kubenswrapper[6976]: I0318 08:48:46.625637 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:46.625896 master-0 kubenswrapper[6976]: I0318 08:48:46.625717 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"
Mar 18 08:48:46.625896 master-0 kubenswrapper[6976]: E0318 08:48:46.625818 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:46.625896 master-0 kubenswrapper[6976]: E0318 08:48:46.625859 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.625844692 +0000 UTC m=+30.211446287 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : configmap "client-ca" not found
Mar 18 08:48:46.626049 master-0 kubenswrapper[6976]: E0318 08:48:46.626032 6976 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 18 08:48:46.626093 master-0 kubenswrapper[6976]: E0318 08:48:46.626068 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.626060258 +0000 UTC m=+30.211661843 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : secret "serving-cert" not found
Mar 18 08:48:46.767051 master-0 kubenswrapper[6976]: I0318 08:48:46.766976 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9f5494f5-2fsqd"]
Mar 18 08:48:46.768623 master-0 kubenswrapper[6976]: I0318 08:48:46.768595 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.778807 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779305 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779371 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779397 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779482 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779509 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.779413 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.780154 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 18 08:48:46.783182 master-0 kubenswrapper[6976]: I0318 08:48:46.783031 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 08:48:46.792175 master-0 kubenswrapper[6976]: I0318 08:48:46.792121 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9f5494f5-2fsqd"]
Mar 18 08:48:46.792680 master-0 kubenswrapper[6976]: I0318 08:48:46.792602 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830447 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830491 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830532 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830552 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830644 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830669 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830796 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.830922 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpqpg\" (UniqueName: \"kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.831022 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.831108 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.832641 master-0 kubenswrapper[6976]: I0318 08:48:46.831134 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.903192 master-0 kubenswrapper[6976]: I0318 08:48:46.903136 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerStarted","Data":"33e56283f4dd8ac24caee15f786ea6510615bc162dce03fd6d49e923d46259ed"}
Mar 18 08:48:46.903192 master-0 kubenswrapper[6976]: I0318 08:48:46.903177 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerStarted","Data":"de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c"}
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.932142 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: E0318 08:48:46.932276 6976 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: E0318 08:48:46.932380 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit podName:9b64f003-c3ed-4010-ad3e-547da7f8c8ca nodeName:}" failed. No retries permitted until 2026-03-18 08:48:47.432351112 +0000 UTC m=+27.017952747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit") pod "apiserver-9f5494f5-2fsqd" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca") : configmap "audit-0" not found
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.932833 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.932890 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.932959 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933000 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933050 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933131 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933248 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933300 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933345 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.933719 master-0 kubenswrapper[6976]: I0318 08:48:46.933382 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpqpg\" (UniqueName: \"kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.935069 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.935437 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.935611 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.935671 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.935730 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.936528 master-0 kubenswrapper[6976]: I0318 08:48:46.936410 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.947840 master-0 kubenswrapper[6976]: I0318 08:48:46.942081 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.947840 master-0 kubenswrapper[6976]: I0318 08:48:46.943752 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.947840 master-0 kubenswrapper[6976]: I0318 08:48:46.943985 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:46.956957 master-0 kubenswrapper[6976]: I0318 08:48:46.953539 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpqpg\" (UniqueName: \"kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:47.438654 master-0 kubenswrapper[6976]: I0318 08:48:47.438530 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:47.438821 master-0 kubenswrapper[6976]: E0318 08:48:47.438722 6976 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 18 08:48:47.438821 master-0 kubenswrapper[6976]: E0318 08:48:47.438768 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit podName:9b64f003-c3ed-4010-ad3e-547da7f8c8ca nodeName:}" failed. No retries permitted until 2026-03-18 08:48:48.438754641 +0000 UTC m=+28.024356226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit") pod "apiserver-9f5494f5-2fsqd" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca") : configmap "audit-0" not found
Mar 18 08:48:47.540189 master-0 kubenswrapper[6976]: I0318 08:48:47.540125 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:47.540393 master-0 kubenswrapper[6976]: I0318 08:48:47.540256 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"
Mar 18 08:48:47.540393 master-0 kubenswrapper[6976]: E0318 08:48:47.540341 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 18 08:48:47.540453 master-0 kubenswrapper[6976]: E0318 08:48:47.540398 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:55.540380702 +0000 UTC m=+35.125982297 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found Mar 18 08:48:47.540453 master-0 kubenswrapper[6976]: E0318 08:48:47.540416 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 18 08:48:47.540516 master-0 kubenswrapper[6976]: E0318 08:48:47.540497 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:55.540477615 +0000 UTC m=+35.126079250 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found Mar 18 08:48:47.911113 master-0 kubenswrapper[6976]: I0318 08:48:47.911033 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerStarted","Data":"177f16090fa41cba4e3892f17219367dee40fa3695daf9c589750f25c0f6d328"} Mar 18 08:48:47.912262 master-0 kubenswrapper[6976]: I0318 08:48:47.911126 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:47.915162 master-0 kubenswrapper[6976]: I0318 08:48:47.914272 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" 
event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"b7023722fb31c9ade901bb4f5f5537f159e85f319ef882c910c37283dbf679ec"} Mar 18 08:48:47.937143 master-0 kubenswrapper[6976]: I0318 08:48:47.937061 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podStartSLOduration=3.937040864 podStartE2EDuration="3.937040864s" podCreationTimestamp="2026-03-18 08:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:47.935507292 +0000 UTC m=+27.521108937" watchObservedRunningTime="2026-03-18 08:48:47.937040864 +0000 UTC m=+27.522642469" Mar 18 08:48:48.275047 master-0 kubenswrapper[6976]: I0318 08:48:48.274772 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:48.290656 master-0 kubenswrapper[6976]: I0318 08:48:48.290351 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podStartSLOduration=1.83185662 podStartE2EDuration="4.290302972s" podCreationTimestamp="2026-03-18 08:48:44 +0000 UTC" firstStartedPulling="2026-03-18 08:48:44.67960007 +0000 UTC m=+24.265201665" lastFinishedPulling="2026-03-18 08:48:47.138046412 +0000 UTC m=+26.723648017" observedRunningTime="2026-03-18 08:48:47.961695279 +0000 UTC m=+27.547296904" watchObservedRunningTime="2026-03-18 08:48:48.290302972 +0000 UTC m=+27.875904567" Mar 18 08:48:48.451594 master-0 kubenswrapper[6976]: I0318 08:48:48.451102 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " 
pod="openshift-apiserver/apiserver-9f5494f5-2fsqd" Mar 18 08:48:48.451594 master-0 kubenswrapper[6976]: E0318 08:48:48.451377 6976 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:48:48.451594 master-0 kubenswrapper[6976]: E0318 08:48:48.451465 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit podName:9b64f003-c3ed-4010-ad3e-547da7f8c8ca nodeName:}" failed. No retries permitted until 2026-03-18 08:48:50.451440406 +0000 UTC m=+30.037042011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit") pod "apiserver-9f5494f5-2fsqd" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca") : configmap "audit-0" not found Mar 18 08:48:48.767787 master-0 kubenswrapper[6976]: I0318 08:48:48.767721 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 08:48:48.768006 master-0 kubenswrapper[6976]: I0318 08:48:48.767829 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 08:48:48.921120 master-0 kubenswrapper[6976]: I0318 08:48:48.921060 6976 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="65e3988f2be17b2abc550a4cf35f76189f8aca364b91625f45824c3c0a649d5f" exitCode=0 Mar 18 08:48:48.921685 master-0 kubenswrapper[6976]: I0318 08:48:48.921124 
6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerDied","Data":"65e3988f2be17b2abc550a4cf35f76189f8aca364b91625f45824c3c0a649d5f"} Mar 18 08:48:48.922517 master-0 kubenswrapper[6976]: I0318 08:48:48.922486 6976 scope.go:117] "RemoveContainer" containerID="65e3988f2be17b2abc550a4cf35f76189f8aca364b91625f45824c3c0a649d5f" Mar 18 08:48:49.929199 master-0 kubenswrapper[6976]: I0318 08:48:49.929141 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"40b12e3472fb68e00bb6ce887f00cd26e55268f567f01e14fdcd62a66e212074"} Mar 18 08:48:49.930389 master-0 kubenswrapper[6976]: I0318 08:48:49.930334 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:50.017377 master-0 kubenswrapper[6976]: I0318 08:48:50.017321 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-9f5494f5-2fsqd"] Mar 18 08:48:50.018014 master-0 kubenswrapper[6976]: E0318 08:48:50.017825 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-9f5494f5-2fsqd" podUID="9b64f003-c3ed-4010-ad3e-547da7f8c8ca" Mar 18 08:48:50.494994 master-0 kubenswrapper[6976]: I0318 08:48:50.494914 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") pod \"apiserver-9f5494f5-2fsqd\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " pod="openshift-apiserver/apiserver-9f5494f5-2fsqd" Mar 18 08:48:50.495331 master-0 kubenswrapper[6976]: E0318 
08:48:50.495066 6976 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 18 08:48:50.495331 master-0 kubenswrapper[6976]: E0318 08:48:50.495132 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit podName:9b64f003-c3ed-4010-ad3e-547da7f8c8ca nodeName:}" failed. No retries permitted until 2026-03-18 08:48:54.495113733 +0000 UTC m=+34.080715328 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit") pod "apiserver-9f5494f5-2fsqd" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca") : configmap "audit-0" not found Mar 18 08:48:50.609211 master-0 kubenswrapper[6976]: I0318 08:48:50.609152 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:48:50.611562 master-0 kubenswrapper[6976]: I0318 08:48:50.611527 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.614821 master-0 kubenswrapper[6976]: I0318 08:48:50.614760 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 18 08:48:50.620473 master-0 kubenswrapper[6976]: I0318 08:48:50.620362 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:48:50.697774 master-0 kubenswrapper[6976]: I0318 08:48:50.697716 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.698042 master-0 kubenswrapper[6976]: I0318 08:48:50.697815 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:50.698042 master-0 kubenswrapper[6976]: I0318 08:48:50.697963 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.698042 master-0 kubenswrapper[6976]: I0318 08:48:50.698030 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock\") pod \"installer-1-master-0\" (UID: 
\"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.698256 master-0 kubenswrapper[6976]: I0318 08:48:50.698099 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:50.698256 master-0 kubenswrapper[6976]: E0318 08:48:50.698147 6976 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:50.698256 master-0 kubenswrapper[6976]: E0318 08:48:50.698197 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca podName:7d4da563-a6c3-43fe-abee-ba217b634f5b nodeName:}" failed. No retries permitted until 2026-03-18 08:48:58.69817808 +0000 UTC m=+38.283779675 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca") pod "controller-manager-7589bfc69c-9b2pp" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b") : configmap "client-ca" not found Mar 18 08:48:50.705668 master-0 kubenswrapper[6976]: I0318 08:48:50.705592 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:50.799763 master-0 kubenswrapper[6976]: I0318 08:48:50.799534 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.799763 master-0 kubenswrapper[6976]: I0318 08:48:50.799681 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.800086 master-0 kubenswrapper[6976]: I0318 08:48:50.799875 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.800086 master-0 kubenswrapper[6976]: I0318 08:48:50.799924 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.800086 master-0 kubenswrapper[6976]: I0318 08:48:50.800027 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.818348 master-0 kubenswrapper[6976]: I0318 08:48:50.818261 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.932198 master-0 kubenswrapper[6976]: I0318 08:48:50.932159 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9f5494f5-2fsqd" Mar 18 08:48:50.942395 master-0 kubenswrapper[6976]: I0318 08:48:50.942328 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:48:50.944406 master-0 kubenswrapper[6976]: I0318 08:48:50.943341 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9f5494f5-2fsqd" Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002008 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002412 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002457 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002487 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002511 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002544 6976 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002584 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002624 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002649 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpqpg\" (UniqueName: \"kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.002998 master-0 kubenswrapper[6976]: I0318 08:48:51.002685 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle\") pod \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\" (UID: \"9b64f003-c3ed-4010-ad3e-547da7f8c8ca\") " Mar 18 08:48:51.004056 master-0 kubenswrapper[6976]: I0318 08:48:51.003459 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets" (OuterVolumeSpecName: 
"node-pullsecrets") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:51.004056 master-0 kubenswrapper[6976]: I0318 08:48:51.003881 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:51.007809 master-0 kubenswrapper[6976]: I0318 08:48:51.007736 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:48:51.008412 master-0 kubenswrapper[6976]: I0318 08:48:51.008313 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:51.008906 master-0 kubenswrapper[6976]: I0318 08:48:51.008692 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:48:51.008906 master-0 kubenswrapper[6976]: I0318 08:48:51.008728 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config" (OuterVolumeSpecName: "config") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:51.009959 master-0 kubenswrapper[6976]: I0318 08:48:51.009896 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:51.010702 master-0 kubenswrapper[6976]: I0318 08:48:51.010608 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:48:51.012040 master-0 kubenswrapper[6976]: I0318 08:48:51.011012 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg" (OuterVolumeSpecName: "kube-api-access-qpqpg") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "kube-api-access-qpqpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:48:51.013680 master-0 kubenswrapper[6976]: I0318 08:48:51.012246 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9b64f003-c3ed-4010-ad3e-547da7f8c8ca" (UID: "9b64f003-c3ed-4010-ad3e-547da7f8c8ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104381 6976 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104420 6976 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104432 6976 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104444 6976 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104456 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104466 6976 
reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104480 6976 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104491 6976 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104502 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.105034 master-0 kubenswrapper[6976]: I0318 08:48:51.104514 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpqpg\" (UniqueName: \"kubernetes.io/projected/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-kube-api-access-qpqpg\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:51.215348 master-0 kubenswrapper[6976]: I0318 08:48:51.215067 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 18 08:48:51.939844 master-0 kubenswrapper[6976]: I0318 08:48:51.939729 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9f5494f5-2fsqd"
Mar 18 08:48:51.939844 master-0 kubenswrapper[6976]: I0318 08:48:51.939823 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"c393a935-1821-4742-b1bb-0ee52ada5434","Type":"ContainerStarted","Data":"82098974401c2078cdae0b9cda75b7a09e79d037d34e1919901dd8a75694e9fb"}
Mar 18 08:48:51.941784 master-0 kubenswrapper[6976]: I0318 08:48:51.939894 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"c393a935-1821-4742-b1bb-0ee52ada5434","Type":"ContainerStarted","Data":"fb9c3d8b42af9b426126b726ec59a1846a0620aa47da4e39676529cdfdcfe989"}
Mar 18 08:48:51.969590 master-0 kubenswrapper[6976]: I0318 08:48:51.969468 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=1.969446104 podStartE2EDuration="1.969446104s" podCreationTimestamp="2026-03-18 08:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:48:51.965216047 +0000 UTC m=+31.550817672" watchObservedRunningTime="2026-03-18 08:48:51.969446104 +0000 UTC m=+31.555047709"
Mar 18 08:48:52.019616 master-0 kubenswrapper[6976]: I0318 08:48:52.017137 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-77f845f574-2wpgz"]
Mar 18 08:48:52.019616 master-0 kubenswrapper[6976]: I0318 08:48:52.018267 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.024640 master-0 kubenswrapper[6976]: I0318 08:48:52.023700 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-9f5494f5-2fsqd"]
Mar 18 08:48:52.034168 master-0 kubenswrapper[6976]: I0318 08:48:52.034123 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-9f5494f5-2fsqd"]
Mar 18 08:48:52.034360 master-0 kubenswrapper[6976]: I0318 08:48:52.034239 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 08:48:52.034449 master-0 kubenswrapper[6976]: I0318 08:48:52.034436 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 08:48:52.038836 master-0 kubenswrapper[6976]: I0318 08:48:52.037464 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 08:48:52.038836 master-0 kubenswrapper[6976]: I0318 08:48:52.037702 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 08:48:52.038836 master-0 kubenswrapper[6976]: I0318 08:48:52.037879 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 08:48:52.038836 master-0 kubenswrapper[6976]: I0318 08:48:52.038013 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 18 08:48:52.041012 master-0 kubenswrapper[6976]: I0318 08:48:52.040966 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 08:48:52.041668 master-0 kubenswrapper[6976]: I0318 08:48:52.041630 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 08:48:52.041921 master-0 kubenswrapper[6976]: I0318 08:48:52.041884 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 08:48:52.046997 master-0 kubenswrapper[6976]: I0318 08:48:52.046942 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 18 08:48:52.064612 master-0 kubenswrapper[6976]: I0318 08:48:52.061130 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-77f845f574-2wpgz"]
Mar 18 08:48:52.128178 master-0 kubenswrapper[6976]: I0318 08:48:52.128113 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128178 master-0 kubenswrapper[6976]: I0318 08:48:52.128158 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128178 master-0 kubenswrapper[6976]: I0318 08:48:52.128176 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128207 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128224 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128294 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128326 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128351 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128367 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczj8\" (UniqueName: \"kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128450 master-0 kubenswrapper[6976]: I0318 08:48:52.128416 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128178 master-0 kubenswrapper[6976]: I0318 08:48:52.128434 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.128675 master-0 kubenswrapper[6976]: I0318 08:48:52.128471 6976 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9b64f003-c3ed-4010-ad3e-547da7f8c8ca-audit\") on node \"master-0\" DevicePath \"\""
Mar 18 08:48:52.229371 master-0 kubenswrapper[6976]: I0318 08:48:52.229322 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229545 master-0 kubenswrapper[6976]: I0318 08:48:52.229414 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229545 master-0 kubenswrapper[6976]: I0318 08:48:52.229451 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229545 master-0 kubenswrapper[6976]: I0318 08:48:52.229477 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229545 master-0 kubenswrapper[6976]: I0318 08:48:52.229494 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczj8\" (UniqueName: \"kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229546 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229578 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229607 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229623 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229636 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.229735 master-0 kubenswrapper[6976]: I0318 08:48:52.229658 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.230919 master-0 kubenswrapper[6976]: I0318 08:48:52.230889 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.231176 master-0 kubenswrapper[6976]: I0318 08:48:52.231111 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.231544 master-0 kubenswrapper[6976]: I0318 08:48:52.231513 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.231620 master-0 kubenswrapper[6976]: I0318 08:48:52.231541 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.231882 master-0 kubenswrapper[6976]: I0318 08:48:52.231824 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.232712 master-0 kubenswrapper[6976]: I0318 08:48:52.232665 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.233103 master-0 kubenswrapper[6976]: I0318 08:48:52.233071 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.234894 master-0 kubenswrapper[6976]: I0318 08:48:52.234811 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.234959 master-0 kubenswrapper[6976]: I0318 08:48:52.234864 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.241963 master-0 kubenswrapper[6976]: I0318 08:48:52.241931 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.257702 master-0 kubenswrapper[6976]: I0318 08:48:52.257645 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczj8\" (UniqueName: \"kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.349264 master-0 kubenswrapper[6976]: I0318 08:48:52.349201 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-77f845f574-2wpgz"
Mar 18 08:48:52.560718 master-0 kubenswrapper[6976]: I0318 08:48:52.560673 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-77f845f574-2wpgz"]
Mar 18 08:48:52.568011 master-0 kubenswrapper[6976]: W0318 08:48:52.567779 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f2b373_0c85_4028_9089_9e9dff5d37b5.slice/crio-bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62 WatchSource:0}: Error finding container bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62: Status 404 returned error can't find the container with id bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62
Mar 18 08:48:52.618967 master-0 kubenswrapper[6976]: I0318 08:48:52.618881 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b64f003-c3ed-4010-ad3e-547da7f8c8ca" path="/var/lib/kubelet/pods/9b64f003-c3ed-4010-ad3e-547da7f8c8ca/volumes"
Mar 18 08:48:52.665589 master-0 kubenswrapper[6976]: I0318 08:48:52.665136 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"]
Mar 18 08:48:52.670593 master-0 kubenswrapper[6976]: I0318 08:48:52.665871 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.670593 master-0 kubenswrapper[6976]: I0318 08:48:52.667377 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 08:48:52.670593 master-0 kubenswrapper[6976]: I0318 08:48:52.668211 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 08:48:52.670593 master-0 kubenswrapper[6976]: I0318 08:48:52.668368 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 08:48:52.674627 master-0 kubenswrapper[6976]: I0318 08:48:52.672015 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 08:48:52.674627 master-0 kubenswrapper[6976]: I0318 08:48:52.672059 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 18 08:48:52.674627 master-0 kubenswrapper[6976]: I0318 08:48:52.672132 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 08:48:52.674627 master-0 kubenswrapper[6976]: I0318 08:48:52.672340 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 18 08:48:52.674627 master-0 kubenswrapper[6976]: I0318 08:48:52.673983 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"]
Mar 18 08:48:52.675045 master-0 kubenswrapper[6976]: I0318 08:48:52.674862 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737557 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737689 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737721 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737819 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737843 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737862 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737900 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.737973 master-0 kubenswrapper[6976]: I0318 08:48:52.737919 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfrf\" (UniqueName: \"kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.839558 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.839834 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.839926 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.840171 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.840240 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.840273 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: I0318 08:48:52.840294 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: E0318 08:48:52.840525 6976 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 08:48:52.840839 master-0 kubenswrapper[6976]: E0318 08:48:52.840692 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert podName:15b6612f-3a51-4a67-a566-8c520f85c6c2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:53.340667071 +0000 UTC m=+32.926268696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert") pod "apiserver-6ff67f5cc6-vg6s9" (UID: "15b6612f-3a51-4a67-a566-8c520f85c6c2") : secret "serving-cert" not found
Mar 18 08:48:52.841731 master-0 kubenswrapper[6976]: I0318 08:48:52.840846 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.842597 master-0 kubenswrapper[6976]: I0318 08:48:52.842510 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.842792 master-0 kubenswrapper[6976]: I0318 08:48:52.842728 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.842895 master-0 kubenswrapper[6976]: I0318 08:48:52.842800 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfrf\" (UniqueName: \"kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.844068 master-0 kubenswrapper[6976]: I0318 08:48:52.844013 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.847758 master-0 kubenswrapper[6976]: I0318 08:48:52.847657 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.856014 master-0 kubenswrapper[6976]: I0318 08:48:52.855968 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.858590 master-0 kubenswrapper[6976]: I0318 08:48:52.858511 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfrf\" (UniqueName: \"kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:52.944416 master-0 kubenswrapper[6976]: I0318 08:48:52.944351 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" event={"ID":"a1f2b373-0c85-4028-9089-9e9dff5d37b5","Type":"ContainerStarted","Data":"bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62"}
Mar 18 08:48:53.350788 master-0 kubenswrapper[6976]: I0318 08:48:53.350684 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"
Mar 18 08:48:53.351110 master-0 kubenswrapper[6976]: E0318 08:48:53.350891 6976 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 18 08:48:53.351110 master-0 kubenswrapper[6976]: E0318 08:48:53.350984 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert podName:15b6612f-3a51-4a67-a566-8c520f85c6c2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:54.350967668 +0000 UTC m=+33.936569263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert") pod "apiserver-6ff67f5cc6-vg6s9" (UID: "15b6612f-3a51-4a67-a566-8c520f85c6c2") : secret "serving-cert" not found
Mar 18 08:48:53.452313 master-0 kubenswrapper[6976]: I0318 08:48:53.452240 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:53.452520 master-0 kubenswrapper[6976]: I0318 08:48:53.452325 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:53.452520 master-0 kubenswrapper[6976]: I0318 08:48:53.452373 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:48:53.452520 master-0 kubenswrapper[6976]: I0318 08:48:53.452400 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:48:53.452520 master-0 kubenswrapper[6976]: I0318 08:48:53.452423 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"
Mar 18 08:48:53.452520 master-0 kubenswrapper[6976]: I0318 08:48:53.452485 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:48:53.452767 master-0 kubenswrapper[6976]: I0318 08:48:53.452528 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:53.452816 master-0 kubenswrapper[6976]: E0318 08:48:53.452788 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 18 08:48:53.452923 master-0 kubenswrapper[6976]: E0318 08:48:53.452887 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert podName:2d0da6e3-3887-4361-8eae-e7447f9ff72c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.452860927 +0000 UTC m=+65.038462562 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-k6xp5" (UID: "2d0da6e3-3887-4361-8eae-e7447f9ff72c") : secret "package-server-manager-serving-cert" not found
Mar 18 08:48:53.453789 master-0 kubenswrapper[6976]: E0318 08:48:53.453725 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 18 08:48:53.453885 master-0 kubenswrapper[6976]: E0318 08:48:53.453826 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert podName:c00ee838-424f-482b-942f-08f0952a5ccd nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.453804683 +0000 UTC m=+65.039406298 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert") pod "olm-operator-5c9796789-twp27" (UID: "c00ee838-424f-482b-942f-08f0952a5ccd") : secret "olm-operator-serving-cert" not found
Mar 18 08:48:53.453885 master-0 kubenswrapper[6976]: E0318 08:48:53.453852 6976 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 18 08:48:53.454003 master-0 kubenswrapper[6976]: E0318 08:48:53.453899 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics podName:ca9d4694-8675-47c5-819f-89bba9dcdc0f nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.453887266 +0000 UTC m=+65.039488871 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-m862c" (UID: "ca9d4694-8675-47c5-819f-89bba9dcdc0f") : secret "marketplace-operator-metrics" not found
Mar 18 08:48:53.456668 master-0 kubenswrapper[6976]: I0318 08:48:53.456624 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:53.457207 master-0 kubenswrapper[6976]: I0318 08:48:53.457168 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"
Mar 18 08:48:53.457914 master-0 kubenswrapper[6976]: I0318 08:48:53.457860 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"
Mar 18 08:48:53.457998 master-0 kubenswrapper[6976]: I0318 08:48:53.457863 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID:
\"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:53.553529 master-0 kubenswrapper[6976]: I0318 08:48:53.553430 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:48:53.553529 master-0 kubenswrapper[6976]: I0318 08:48:53.553520 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:53.553943 master-0 kubenswrapper[6976]: E0318 08:48:53.553704 6976 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 18 08:48:53.553943 master-0 kubenswrapper[6976]: E0318 08:48:53.553787 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs podName:7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.553767249 +0000 UTC m=+65.139368854 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-25rbq" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac") : secret "multus-admission-controller-secret" not found Mar 18 08:48:53.553943 master-0 kubenswrapper[6976]: I0318 08:48:53.553823 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:53.553943 master-0 kubenswrapper[6976]: I0318 08:48:53.553900 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:48:53.553943 master-0 kubenswrapper[6976]: I0318 08:48:53.553949 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:48:53.554379 master-0 kubenswrapper[6976]: I0318 08:48:53.554020 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:48:53.554379 master-0 kubenswrapper[6976]: E0318 08:48:53.554066 6976 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 18 08:48:53.554379 master-0 kubenswrapper[6976]: E0318 08:48:53.554152 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs podName:e48101ca-f356-45e3-93d7-4e17b8d8066c nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.554130149 +0000 UTC m=+65.139731754 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs") pod "network-metrics-daemon-2xs9n" (UID: "e48101ca-f356-45e3-93d7-4e17b8d8066c") : secret "metrics-daemon-secret" not found Mar 18 08:48:53.554379 master-0 kubenswrapper[6976]: E0318 08:48:53.554284 6976 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 18 08:48:53.554977 master-0 kubenswrapper[6976]: E0318 08:48:53.554921 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert podName:f6833a48-fccb-42bd-ac90-29f08d5bf7e8 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.554743646 +0000 UTC m=+65.140345331 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert") pod "catalog-operator-68f85b4d6c-hhn7l" (UID: "f6833a48-fccb-42bd-ac90-29f08d5bf7e8") : secret "catalog-operator-serving-cert" not found Mar 18 08:48:53.555081 master-0 kubenswrapper[6976]: E0318 08:48:53.555040 6976 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:53.555154 master-0 kubenswrapper[6976]: E0318 08:48:53.555086 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls podName:09269324-c908-474d-818f-5cd49406f1e2 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:25.555070725 +0000 UTC m=+65.140672380 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-8vfjr" (UID: "09269324-c908-474d-818f-5cd49406f1e2") : secret "cluster-monitoring-operator-tls" not found Mar 18 08:48:53.558819 master-0 kubenswrapper[6976]: I0318 08:48:53.558724 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"cluster-version-operator-56d8475767-t9zrr\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:53.562265 master-0 kubenswrapper[6976]: I0318 08:48:53.559531 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " 
pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:53.651398 master-0 kubenswrapper[6976]: I0318 08:48:53.651269 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 08:48:53.651940 master-0 kubenswrapper[6976]: I0318 08:48:53.651787 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 08:48:53.654620 master-0 kubenswrapper[6976]: I0318 08:48:53.654559 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 08:48:53.657547 master-0 kubenswrapper[6976]: I0318 08:48:53.657485 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 08:48:53.657725 master-0 kubenswrapper[6976]: I0318 08:48:53.657559 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:48:53.851513 master-0 kubenswrapper[6976]: I0318 08:48:53.847085 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp"] Mar 18 08:48:53.949538 master-0 kubenswrapper[6976]: I0318 08:48:53.949329 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" event={"ID":"85d361a2-3f83-4857-b96e-3e98fcf33463","Type":"ContainerStarted","Data":"b2a09192199dc47c2741f7796cc99b6c355559f7813fa31bd13f72c5529a9df3"} Mar 18 08:48:53.950625 master-0 kubenswrapper[6976]: I0318 08:48:53.950533 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" event={"ID":"1deb139f-1903-417e-835c-28abdd156cdb","Type":"ContainerStarted","Data":"345478a9f31c33009fc0312365cde9a2e83761bfa6df9d1f8521197057d19304"} Mar 18 08:48:54.097887 master-0 kubenswrapper[6976]: I0318 08:48:54.092843 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-2649q"] Mar 18 08:48:54.097887 master-0 kubenswrapper[6976]: I0318 08:48:54.097786 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf"] Mar 18 08:48:54.104787 master-0 kubenswrapper[6976]: I0318 08:48:54.104740 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh"] Mar 18 08:48:54.273169 master-0 kubenswrapper[6976]: I0318 08:48:54.272336 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 08:48:54.341716 master-0 kubenswrapper[6976]: I0318 08:48:54.341633 6976 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:48:54.348602 master-0 kubenswrapper[6976]: I0318 08:48:54.348541 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.352789 master-0 kubenswrapper[6976]: I0318 08:48:54.352735 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 08:48:54.363270 master-0 kubenswrapper[6976]: I0318 08:48:54.363242 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:48:54.364173 master-0 kubenswrapper[6976]: E0318 08:48:54.364125 6976 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 18 08:48:54.364245 master-0 kubenswrapper[6976]: E0318 08:48:54.364215 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert podName:15b6612f-3a51-4a67-a566-8c520f85c6c2 nodeName:}" failed. No retries permitted until 2026-03-18 08:48:56.364192468 +0000 UTC m=+35.949794133 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert") pod "apiserver-6ff67f5cc6-vg6s9" (UID: "15b6612f-3a51-4a67-a566-8c520f85c6c2") : secret "serving-cert" not found Mar 18 08:48:54.464880 master-0 kubenswrapper[6976]: I0318 08:48:54.464809 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.464880 master-0 kubenswrapper[6976]: I0318 08:48:54.464870 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.465215 master-0 kubenswrapper[6976]: I0318 08:48:54.465111 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.566339 master-0 kubenswrapper[6976]: I0318 08:48:54.566173 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.566339 master-0 kubenswrapper[6976]: I0318 08:48:54.566328 6976 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.566731 master-0 kubenswrapper[6976]: I0318 08:48:54.566354 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.566731 master-0 kubenswrapper[6976]: I0318 08:48:54.566500 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.566731 master-0 kubenswrapper[6976]: I0318 08:48:54.566535 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:54.582173 master-0 kubenswrapper[6976]: I0318 08:48:54.582117 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:48:54.709305 master-0 kubenswrapper[6976]: I0318 08:48:54.709267 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:48:54.980969 master-0 kubenswrapper[6976]: I0318 08:48:54.980901 6976 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" event={"ID":"4192ea44-a38c-4b70-93c3-8070da2ffe2f","Type":"ContainerStarted","Data":"3a452f53888d80954ddda76e2511f1f532656825d47ec252e4f76b2a75b26a96"} Mar 18 08:48:54.982452 master-0 kubenswrapper[6976]: I0318 08:48:54.982394 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" event={"ID":"6c56e1ac-8752-4e46-8692-93716087f0e0","Type":"ContainerStarted","Data":"3ac5162bd81def353052ebf597421eb671cb88aec927ef74f518a70f421eb249"} Mar 18 08:48:54.983339 master-0 kubenswrapper[6976]: I0318 08:48:54.983311 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"3ec66dd169d08be1b920bf1865303a7a46910236130e7f06946e53376569a93c"} Mar 18 08:48:55.576143 master-0 kubenswrapper[6976]: I0318 08:48:55.576051 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:55.576402 master-0 kubenswrapper[6976]: I0318 08:48:55.576232 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") pod \"route-controller-manager-689cb4b98f-llbf6\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:55.576402 master-0 kubenswrapper[6976]: E0318 08:48:55.576230 6976 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: 
secret "serving-cert" not found Mar 18 08:48:55.576402 master-0 kubenswrapper[6976]: E0318 08:48:55.576317 6976 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 18 08:48:55.576402 master-0 kubenswrapper[6976]: E0318 08:48:55.576344 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:11.57631451 +0000 UTC m=+51.161916135 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : secret "serving-cert" not found Mar 18 08:48:55.576402 master-0 kubenswrapper[6976]: E0318 08:48:55.576372 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca podName:042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7 nodeName:}" failed. No retries permitted until 2026-03-18 08:49:11.576357911 +0000 UTC m=+51.161959506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca") pod "route-controller-manager-689cb4b98f-llbf6" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7") : configmap "client-ca" not found Mar 18 08:48:55.623648 master-0 kubenswrapper[6976]: I0318 08:48:55.618856 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access\") pod \"installer-1-master-0\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:55.872323 master-0 kubenswrapper[6976]: I0318 08:48:55.872157 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:48:56.125538 master-0 kubenswrapper[6976]: I0318 08:48:56.125153 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 08:48:56.390457 master-0 kubenswrapper[6976]: I0318 08:48:56.390339 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:48:56.391238 master-0 kubenswrapper[6976]: E0318 08:48:56.391186 6976 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 18 08:48:56.391308 master-0 kubenswrapper[6976]: E0318 08:48:56.391274 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert podName:15b6612f-3a51-4a67-a566-8c520f85c6c2 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:49:00.391240854 +0000 UTC m=+39.976842459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert") pod "apiserver-6ff67f5cc6-vg6s9" (UID: "15b6612f-3a51-4a67-a566-8c520f85c6c2") : secret "serving-cert" not found Mar 18 08:48:56.848655 master-0 kubenswrapper[6976]: I0318 08:48:56.848582 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:48:58.021219 master-0 kubenswrapper[6976]: I0318 08:48:58.021170 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9ecb08ad-f7f1-466e-9b8a-b162137bfebd","Type":"ContainerStarted","Data":"82b8a76b2600434ebee5ee4ed08dbb29d8146560821e8d2a1127da598ab1b928"} Mar 18 08:48:58.372813 master-0 kubenswrapper[6976]: I0318 08:48:58.372755 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"] Mar 18 08:48:58.373423 master-0 kubenswrapper[6976]: E0318 08:48:58.373381 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" podUID="7d4da563-a6c3-43fe-abee-ba217b634f5b" Mar 18 08:48:58.395218 master-0 kubenswrapper[6976]: I0318 08:48:58.395167 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"] Mar 18 08:48:58.395486 master-0 kubenswrapper[6976]: E0318 08:48:58.395460 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" 
podUID="042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7" Mar 18 08:48:58.738590 master-0 kubenswrapper[6976]: I0318 08:48:58.738480 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:58.739337 master-0 kubenswrapper[6976]: I0318 08:48:58.739302 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"controller-manager-7589bfc69c-9b2pp\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:59.030977 master-0 kubenswrapper[6976]: I0318 08:48:59.030345 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:59.030977 master-0 kubenswrapper[6976]: I0318 08:48:59.030370 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:59.041772 master-0 kubenswrapper[6976]: I0318 08:48:59.041356 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:48:59.045983 master-0 kubenswrapper[6976]: I0318 08:48:59.045953 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142131 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config\") pod \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142211 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles\") pod \"7d4da563-a6c3-43fe-abee-ba217b634f5b\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142251 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") pod \"7d4da563-a6c3-43fe-abee-ba217b634f5b\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142272 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5mtg\" (UniqueName: \"kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg\") pod \"7d4da563-a6c3-43fe-abee-ba217b634f5b\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142306 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") pod \"7d4da563-a6c3-43fe-abee-ba217b634f5b\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142327 6976 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvpt6\" (UniqueName: \"kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6\") pod \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\" (UID: \"042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7\") " Mar 18 08:48:59.142689 master-0 kubenswrapper[6976]: I0318 08:48:59.142356 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config\") pod \"7d4da563-a6c3-43fe-abee-ba217b634f5b\" (UID: \"7d4da563-a6c3-43fe-abee-ba217b634f5b\") " Mar 18 08:48:59.144178 master-0 kubenswrapper[6976]: I0318 08:48:59.142978 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7d4da563-a6c3-43fe-abee-ba217b634f5b" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:59.144178 master-0 kubenswrapper[6976]: I0318 08:48:59.143023 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config" (OuterVolumeSpecName: "config") pod "7d4da563-a6c3-43fe-abee-ba217b634f5b" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:59.144178 master-0 kubenswrapper[6976]: I0318 08:48:59.143760 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d4da563-a6c3-43fe-abee-ba217b634f5b" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:59.144884 master-0 kubenswrapper[6976]: I0318 08:48:59.144817 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config" (OuterVolumeSpecName: "config") pod "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:48:59.147752 master-0 kubenswrapper[6976]: I0318 08:48:59.147693 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6" (OuterVolumeSpecName: "kube-api-access-nvpt6") pod "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7" (UID: "042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7"). InnerVolumeSpecName "kube-api-access-nvpt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:48:59.148128 master-0 kubenswrapper[6976]: I0318 08:48:59.148059 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d4da563-a6c3-43fe-abee-ba217b634f5b" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:48:59.153351 master-0 kubenswrapper[6976]: I0318 08:48:59.153295 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg" (OuterVolumeSpecName: "kube-api-access-j5mtg") pod "7d4da563-a6c3-43fe-abee-ba217b634f5b" (UID: "7d4da563-a6c3-43fe-abee-ba217b634f5b"). InnerVolumeSpecName "kube-api-access-j5mtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:48:59.243535 master-0 kubenswrapper[6976]: I0318 08:48:59.243430 6976 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.243535 master-0 kubenswrapper[6976]: I0318 08:48:59.243493 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.243535 master-0 kubenswrapper[6976]: I0318 08:48:59.243513 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5mtg\" (UniqueName: \"kubernetes.io/projected/7d4da563-a6c3-43fe-abee-ba217b634f5b-kube-api-access-j5mtg\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.243535 master-0 kubenswrapper[6976]: I0318 08:48:59.243532 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d4da563-a6c3-43fe-abee-ba217b634f5b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.243535 master-0 kubenswrapper[6976]: I0318 08:48:59.243550 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvpt6\" (UniqueName: \"kubernetes.io/projected/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-kube-api-access-nvpt6\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.244026 master-0 kubenswrapper[6976]: I0318 08:48:59.243610 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d4da563-a6c3-43fe-abee-ba217b634f5b-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:48:59.244026 master-0 kubenswrapper[6976]: I0318 08:48:59.243630 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-config\") on 
node \"master-0\" DevicePath \"\"" Mar 18 08:49:00.036907 master-0 kubenswrapper[6976]: I0318 08:49:00.036346 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9ecb08ad-f7f1-466e-9b8a-b162137bfebd","Type":"ContainerStarted","Data":"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07"} Mar 18 08:49:00.036907 master-0 kubenswrapper[6976]: I0318 08:49:00.036368 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7589bfc69c-9b2pp" Mar 18 08:49:00.036907 master-0 kubenswrapper[6976]: I0318 08:49:00.036437 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6" Mar 18 08:49:00.075204 master-0 kubenswrapper[6976]: I0318 08:49:00.074363 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=6.074336116 podStartE2EDuration="6.074336116s" podCreationTimestamp="2026-03-18 08:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:00.052395047 +0000 UTC m=+39.637996642" watchObservedRunningTime="2026-03-18 08:49:00.074336116 +0000 UTC m=+39.659937711" Mar 18 08:49:00.082878 master-0 kubenswrapper[6976]: I0318 08:49:00.081985 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:00.082878 master-0 kubenswrapper[6976]: I0318 08:49:00.082627 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.085594 master-0 kubenswrapper[6976]: I0318 08:49:00.085530 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:00.087351 master-0 kubenswrapper[6976]: I0318 08:49:00.087249 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 08:49:00.087925 master-0 kubenswrapper[6976]: I0318 08:49:00.087614 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:00.088074 master-0 kubenswrapper[6976]: I0318 08:49:00.087983 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 08:49:00.089441 master-0 kubenswrapper[6976]: I0318 08:49:00.089394 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 08:49:00.090740 master-0 kubenswrapper[6976]: I0318 08:49:00.090707 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"] Mar 18 08:49:00.092459 master-0 kubenswrapper[6976]: I0318 08:49:00.092432 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:00.093448 master-0 kubenswrapper[6976]: I0318 08:49:00.093414 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-689cb4b98f-llbf6"] Mar 18 08:49:00.116305 master-0 kubenswrapper[6976]: I0318 08:49:00.116138 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"] Mar 18 08:49:00.117781 master-0 kubenswrapper[6976]: I0318 
08:49:00.117759 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7589bfc69c-9b2pp"] Mar 18 08:49:00.159198 master-0 kubenswrapper[6976]: I0318 08:49:00.159065 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfw4w\" (UniqueName: \"kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.159198 master-0 kubenswrapper[6976]: I0318 08:49:00.159119 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.159503 master-0 kubenswrapper[6976]: I0318 08:49:00.159299 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.159503 master-0 kubenswrapper[6976]: I0318 08:49:00.159465 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " 
pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.260435 master-0 kubenswrapper[6976]: I0318 08:49:00.259989 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfw4w\" (UniqueName: \"kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.260435 master-0 kubenswrapper[6976]: I0318 08:49:00.260209 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.260435 master-0 kubenswrapper[6976]: I0318 08:49:00.260269 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.260435 master-0 kubenswrapper[6976]: I0318 08:49:00.260334 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.260823 master-0 kubenswrapper[6976]: I0318 08:49:00.260540 6976 reconciler_common.go:293] "Volume detached for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:00.260823 master-0 kubenswrapper[6976]: I0318 08:49:00.260587 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:00.261398 master-0 kubenswrapper[6976]: I0318 08:49:00.261360 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.262823 master-0 kubenswrapper[6976]: I0318 08:49:00.262318 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.265822 master-0 kubenswrapper[6976]: I0318 08:49:00.265760 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert\") pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.281740 master-0 kubenswrapper[6976]: I0318 08:49:00.281699 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfw4w\" (UniqueName: \"kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w\") 
pod \"route-controller-manager-6c95d4578f-2qx7p\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.409511 master-0 kubenswrapper[6976]: I0318 08:49:00.409032 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:00.462864 master-0 kubenswrapper[6976]: I0318 08:49:00.462807 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:00.465447 master-0 kubenswrapper[6976]: I0318 08:49:00.465413 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:00.491578 master-0 kubenswrapper[6976]: I0318 08:49:00.491500 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:00.603931 master-0 kubenswrapper[6976]: I0318 08:49:00.603400 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7" path="/var/lib/kubelet/pods/042b1b8b-d0c7-4ce1-94e5-0a65d373e9d7/volumes" Mar 18 08:49:00.604224 master-0 kubenswrapper[6976]: I0318 08:49:00.604190 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d4da563-a6c3-43fe-abee-ba217b634f5b" path="/var/lib/kubelet/pods/7d4da563-a6c3-43fe-abee-ba217b634f5b/volumes" Mar 18 08:49:02.181995 master-0 kubenswrapper[6976]: I0318 08:49:02.177648 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 08:49:02.213147 master-0 kubenswrapper[6976]: I0318 08:49:02.210505 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:02.214368 master-0 kubenswrapper[6976]: I0318 08:49:02.214335 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.223736 master-0 kubenswrapper[6976]: I0318 08:49:02.218003 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:49:02.223736 master-0 kubenswrapper[6976]: I0318 08:49:02.218270 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 08:49:02.223736 master-0 kubenswrapper[6976]: I0318 08:49:02.218476 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:02.223736 master-0 kubenswrapper[6976]: I0318 08:49:02.218659 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:49:02.227157 master-0 kubenswrapper[6976]: I0318 08:49:02.227116 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:02.234114 master-0 kubenswrapper[6976]: I0318 08:49:02.228997 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:02.234114 master-0 kubenswrapper[6976]: I0318 08:49:02.230406 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 08:49:02.387100 master-0 kubenswrapper[6976]: I0318 08:49:02.386701 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfptn\" (UniqueName: \"kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.387296 master-0 kubenswrapper[6976]: I0318 08:49:02.387111 6976 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.387296 master-0 kubenswrapper[6976]: I0318 08:49:02.387173 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.387379 master-0 kubenswrapper[6976]: I0318 08:49:02.387325 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.387706 master-0 kubenswrapper[6976]: I0318 08:49:02.387388 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.488341 master-0 kubenswrapper[6976]: I0318 08:49:02.488281 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: 
\"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.488513 master-0 kubenswrapper[6976]: I0318 08:49:02.488367 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfptn\" (UniqueName: \"kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.488619 master-0 kubenswrapper[6976]: I0318 08:49:02.488578 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.488725 master-0 kubenswrapper[6976]: I0318 08:49:02.488703 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.488893 master-0 kubenswrapper[6976]: I0318 08:49:02.488869 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.491109 master-0 kubenswrapper[6976]: I0318 08:49:02.491070 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.491314 master-0 kubenswrapper[6976]: I0318 08:49:02.491284 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.492332 master-0 kubenswrapper[6976]: I0318 08:49:02.492300 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.504711 master-0 kubenswrapper[6976]: I0318 08:49:02.504688 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.505517 master-0 kubenswrapper[6976]: I0318 08:49:02.505478 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfptn\" (UniqueName: \"kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn\") pod \"controller-manager-5dbd749c-2j5zn\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:02.556849 master-0 kubenswrapper[6976]: I0318 
08:49:02.556812 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:03.395133 master-0 kubenswrapper[6976]: I0318 08:49:03.393117 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9"] Mar 18 08:49:03.411408 master-0 kubenswrapper[6976]: I0318 08:49:03.411335 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:03.426167 master-0 kubenswrapper[6976]: W0318 08:49:03.426118 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffcdc45e_fa1e_4864_8d5f_b9916719112f.slice/crio-63ae45db776e3ed942737171110f734a8575d6642281b093e32333f1afd4c378 WatchSource:0}: Error finding container 63ae45db776e3ed942737171110f734a8575d6642281b093e32333f1afd4c378: Status 404 returned error can't find the container with id 63ae45db776e3ed942737171110f734a8575d6642281b093e32333f1afd4c378 Mar 18 08:49:03.456055 master-0 kubenswrapper[6976]: I0318 08:49:03.455831 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:03.485026 master-0 kubenswrapper[6976]: W0318 08:49:03.484908 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2b4b463_bdd3_4624_9aa2_8ed7e7f7529a.slice/crio-4e0b2c850f8305c249d90e52b80380962f9cb2f5c3d5e9878c440f1b035def58 WatchSource:0}: Error finding container 4e0b2c850f8305c249d90e52b80380962f9cb2f5c3d5e9878c440f1b035def58: Status 404 returned error can't find the container with id 4e0b2c850f8305c249d90e52b80380962f9cb2f5c3d5e9878c440f1b035def58 Mar 18 08:49:03.650279 master-0 kubenswrapper[6976]: I0318 08:49:03.650101 6976 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-node-tuning-operator/tuned-84qxz"] Mar 18 08:49:03.650825 master-0 kubenswrapper[6976]: I0318 08:49:03.650797 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810495 master-0 kubenswrapper[6976]: I0318 08:49:03.810426 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxshz\" (UniqueName: \"kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810495 master-0 kubenswrapper[6976]: I0318 08:49:03.810493 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810540 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810601 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810637 6976 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810657 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810689 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810720 master-0 kubenswrapper[6976]: I0318 08:49:03.810717 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810880 master-0 kubenswrapper[6976]: I0318 08:49:03.810753 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810880 master-0 kubenswrapper[6976]: 
I0318 08:49:03.810775 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810880 master-0 kubenswrapper[6976]: I0318 08:49:03.810797 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810880 master-0 kubenswrapper[6976]: I0318 08:49:03.810839 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.810880 master-0 kubenswrapper[6976]: I0318 08:49:03.810866 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.811015 master-0 kubenswrapper[6976]: I0318 08:49:03.810885 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 
18 08:49:03.912517 master-0 kubenswrapper[6976]: I0318 08:49:03.912219 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxshz\" (UniqueName: \"kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912517 master-0 kubenswrapper[6976]: I0318 08:49:03.912459 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912517 master-0 kubenswrapper[6976]: I0318 08:49:03.912489 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912517 master-0 kubenswrapper[6976]: I0318 08:49:03.912507 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912530 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 
kubenswrapper[6976]: I0318 08:49:03.912544 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912583 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912598 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912623 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912637 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912659 6976 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912682 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912701 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.912802 master-0 kubenswrapper[6976]: I0318 08:49:03.912716 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.913085 master-0 kubenswrapper[6976]: I0318 08:49:03.912836 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.914451 master-0 kubenswrapper[6976]: I0318 08:49:03.914415 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.914530 master-0 kubenswrapper[6976]: I0318 08:49:03.914492 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.914617 master-0 kubenswrapper[6976]: I0318 08:49:03.914579 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915333 master-0 kubenswrapper[6976]: I0318 08:49:03.915313 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915394 master-0 kubenswrapper[6976]: I0318 08:49:03.915384 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915436 master-0 kubenswrapper[6976]: I0318 08:49:03.915414 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915475 master-0 kubenswrapper[6976]: I0318 08:49:03.915442 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915475 master-0 kubenswrapper[6976]: I0318 08:49:03.915468 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915541 master-0 kubenswrapper[6976]: I0318 08:49:03.915493 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.915606 master-0 kubenswrapper[6976]: I0318 08:49:03.915541 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.926417 master-0 kubenswrapper[6976]: I0318 08:49:03.923804 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp\") pod \"tuned-84qxz\" (UID: 
\"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.926417 master-0 kubenswrapper[6976]: I0318 08:49:03.923937 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.931539 master-0 kubenswrapper[6976]: I0318 08:49:03.931475 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxshz\" (UniqueName: \"kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.978320 master-0 kubenswrapper[6976]: I0318 08:49:03.978273 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 08:49:03.992236 master-0 kubenswrapper[6976]: W0318 08:49:03.992188 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcda44dd8_895a_4eab_bedc_83f38efa2482.slice/crio-0684dee41c1e39e5caf45ad3ecf969187fd91ecc03801196dd3add2b2639bc89 WatchSource:0}: Error finding container 0684dee41c1e39e5caf45ad3ecf969187fd91ecc03801196dd3add2b2639bc89: Status 404 returned error can't find the container with id 0684dee41c1e39e5caf45ad3ecf969187fd91ecc03801196dd3add2b2639bc89 Mar 18 08:49:04.029506 master-0 kubenswrapper[6976]: I0318 08:49:04.029452 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 18 08:49:04.029699 master-0 kubenswrapper[6976]: I0318 08:49:04.029674 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" containerName="installer" containerID="cri-o://41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07" gracePeriod=30 Mar 18 08:49:04.065839 master-0 kubenswrapper[6976]: I0318 08:49:04.065004 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" event={"ID":"1deb139f-1903-417e-835c-28abdd156cdb","Type":"ContainerStarted","Data":"32b058c6d1ee238c753a849a50cae740263263767c61bf2151475052399455e0"} Mar 18 08:49:04.066962 master-0 kubenswrapper[6976]: I0318 08:49:04.066923 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" event={"ID":"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a","Type":"ContainerStarted","Data":"4e0b2c850f8305c249d90e52b80380962f9cb2f5c3d5e9878c440f1b035def58"} Mar 18 08:49:04.067996 master-0 kubenswrapper[6976]: I0318 
08:49:04.067964 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-84qxz" event={"ID":"cda44dd8-895a-4eab-bedc-83f38efa2482","Type":"ContainerStarted","Data":"0684dee41c1e39e5caf45ad3ecf969187fd91ecc03801196dd3add2b2639bc89"} Mar 18 08:49:04.068758 master-0 kubenswrapper[6976]: I0318 08:49:04.068738 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" event={"ID":"15b6612f-3a51-4a67-a566-8c520f85c6c2","Type":"ContainerStarted","Data":"5185a35bdc4ad1949570c4b3508eb6c84e58ffd468abe9bcc3bb2a0cb406ece2"} Mar 18 08:49:04.069840 master-0 kubenswrapper[6976]: I0318 08:49:04.069812 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" event={"ID":"85d361a2-3f83-4857-b96e-3e98fcf33463","Type":"ContainerStarted","Data":"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"} Mar 18 08:49:04.070984 master-0 kubenswrapper[6976]: I0318 08:49:04.070964 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" event={"ID":"4192ea44-a38c-4b70-93c3-8070da2ffe2f","Type":"ContainerStarted","Data":"543ef1f97cff969e6370b175b52f7c692bef20bd03e98ec770a71aa739fb18d8"} Mar 18 08:49:04.071035 master-0 kubenswrapper[6976]: I0318 08:49:04.070989 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" event={"ID":"4192ea44-a38c-4b70-93c3-8070da2ffe2f","Type":"ContainerStarted","Data":"441b716996a656f0736ef26b65283eecf60d2ff2d2b30877544e1b32018ea12c"} Mar 18 08:49:04.072488 master-0 kubenswrapper[6976]: I0318 08:49:04.072455 6976 generic.go:334] "Generic (PLEG): container finished" podID="a1f2b373-0c85-4028-9089-9e9dff5d37b5" containerID="1820c7b891866f2da2386244d406850e2ca41824fea9e45fc4a61e84270cbb14" exitCode=0 Mar 18 08:49:04.072553 master-0 kubenswrapper[6976]: I0318 08:49:04.072504 
6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" event={"ID":"a1f2b373-0c85-4028-9089-9e9dff5d37b5","Type":"ContainerDied","Data":"1820c7b891866f2da2386244d406850e2ca41824fea9e45fc4a61e84270cbb14"} Mar 18 08:49:04.074554 master-0 kubenswrapper[6976]: I0318 08:49:04.074518 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" event={"ID":"6c56e1ac-8752-4e46-8692-93716087f0e0","Type":"ContainerStarted","Data":"e78bbb854e3d9943cb3fa89e45e1e19c6f32f1732fab0adc69b2c8517be93fa3"} Mar 18 08:49:04.084520 master-0 kubenswrapper[6976]: I0318 08:49:04.083911 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"c41e4b2e1f633d6bebd3c94666cdd5cd5f07049109f2dd4dd903a34237dc6d5a"} Mar 18 08:49:04.084520 master-0 kubenswrapper[6976]: I0318 08:49:04.083949 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"9d25c9c9b5ced91c32a1b9dd7e48ce6b3235062e8dd7fa065d776452831b8b1b"} Mar 18 08:49:04.085522 master-0 kubenswrapper[6976]: I0318 08:49:04.084668 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" event={"ID":"ffcdc45e-fa1e-4864-8d5f-b9916719112f","Type":"ContainerStarted","Data":"63ae45db776e3ed942737171110f734a8575d6642281b093e32333f1afd4c378"} Mar 18 08:49:04.258021 master-0 kubenswrapper[6976]: I0318 08:49:04.255130 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pj485"] Mar 18 08:49:04.258021 master-0 kubenswrapper[6976]: I0318 08:49:04.255898 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.266750 master-0 kubenswrapper[6976]: I0318 08:49:04.266706 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 08:49:04.268241 master-0 kubenswrapper[6976]: I0318 08:49:04.266763 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 08:49:04.268241 master-0 kubenswrapper[6976]: I0318 08:49:04.267008 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 08:49:04.268241 master-0 kubenswrapper[6976]: I0318 08:49:04.267125 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 08:49:04.268494 master-0 kubenswrapper[6976]: I0318 08:49:04.268472 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pj485"] Mar 18 08:49:04.423176 master-0 kubenswrapper[6976]: I0318 08:49:04.422885 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mkcq\" (UniqueName: \"kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.424283 master-0 kubenswrapper[6976]: I0318 08:49:04.423208 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.424283 master-0 kubenswrapper[6976]: I0318 08:49:04.423253 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.526503 master-0 kubenswrapper[6976]: I0318 08:49:04.524904 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.526657 master-0 kubenswrapper[6976]: I0318 08:49:04.526506 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.526807 master-0 kubenswrapper[6976]: I0318 08:49:04.526776 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mkcq\" (UniqueName: \"kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.530430 master-0 kubenswrapper[6976]: I0318 08:49:04.530396 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.532600 master-0 kubenswrapper[6976]: I0318 08:49:04.532573 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls\") pod \"dns-default-pj485\" 
(UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.547851 master-0 kubenswrapper[6976]: I0318 08:49:04.547806 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mkcq\" (UniqueName: \"kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.596290 master-0 kubenswrapper[6976]: I0318 08:49:04.595938 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-pj485" Mar 18 08:49:04.658391 master-0 kubenswrapper[6976]: I0318 08:49:04.657522 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-thqlt"] Mar 18 08:49:04.658391 master-0 kubenswrapper[6976]: I0318 08:49:04.658018 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.839934 master-0 kubenswrapper[6976]: I0318 08:49:04.839512 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqjsq\" (UniqueName: \"kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.839934 master-0 kubenswrapper[6976]: I0318 08:49:04.839856 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.946340 master-0 kubenswrapper[6976]: I0318 08:49:04.941107 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sqjsq\" (UniqueName: \"kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.946340 master-0 kubenswrapper[6976]: I0318 08:49:04.941165 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.946340 master-0 kubenswrapper[6976]: I0318 08:49:04.941338 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:04.996948 master-0 kubenswrapper[6976]: I0318 08:49:04.986511 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjsq\" (UniqueName: \"kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:05.000683 master-0 kubenswrapper[6976]: I0318 08:49:04.997405 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pj485"] Mar 18 08:49:05.050781 master-0 kubenswrapper[6976]: I0318 08:49:05.050624 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-thqlt" Mar 18 08:49:05.079174 master-0 kubenswrapper[6976]: W0318 08:49:05.075209 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5e43736_33c3_4949_98ca_971332541d64.slice/crio-c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c WatchSource:0}: Error finding container c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c: Status 404 returned error can't find the container with id c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c Mar 18 08:49:05.092085 master-0 kubenswrapper[6976]: I0318 08:49:05.092034 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pj485" event={"ID":"b2588f5c-327c-49cc-8cfb-0cce1ad758d5","Type":"ContainerStarted","Data":"d6446762bc6a0b43e14b052b6b1fde0273d338b8feb7a11225c2093e688292fc"} Mar 18 08:49:05.094906 master-0 kubenswrapper[6976]: I0318 08:49:05.094312 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" event={"ID":"a1f2b373-0c85-4028-9089-9e9dff5d37b5","Type":"ContainerStarted","Data":"4a27d79d0e539dc77a427a74449e40a925e7f0e0e136fd73ab5846b8690c7eb6"} Mar 18 08:49:05.094906 master-0 kubenswrapper[6976]: I0318 08:49:05.094336 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" event={"ID":"a1f2b373-0c85-4028-9089-9e9dff5d37b5","Type":"ContainerStarted","Data":"3702949aa2dc3e3d8832668591e12cc5601952ad900676b5ff8358de2d26c5d5"} Mar 18 08:49:05.096069 master-0 kubenswrapper[6976]: I0318 08:49:05.096029 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-thqlt" event={"ID":"c5e43736-33c3-4949-98ca-971332541d64","Type":"ContainerStarted","Data":"c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c"} Mar 18 08:49:05.099127 master-0 kubenswrapper[6976]: I0318 
08:49:05.099091 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-84qxz" event={"ID":"cda44dd8-895a-4eab-bedc-83f38efa2482","Type":"ContainerStarted","Data":"db35f7e86b335d0f765db08c33486e5e70510ad144bc18647a9611d6e8fbcd5d"} Mar 18 08:49:05.114266 master-0 kubenswrapper[6976]: I0318 08:49:05.113203 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" podStartSLOduration=4.544059501 podStartE2EDuration="15.113185987s" podCreationTimestamp="2026-03-18 08:48:50 +0000 UTC" firstStartedPulling="2026-03-18 08:48:52.569947735 +0000 UTC m=+32.155549320" lastFinishedPulling="2026-03-18 08:49:03.139074211 +0000 UTC m=+42.724675806" observedRunningTime="2026-03-18 08:49:05.112269712 +0000 UTC m=+44.697871307" watchObservedRunningTime="2026-03-18 08:49:05.113185987 +0000 UTC m=+44.698787582" Mar 18 08:49:05.130359 master-0 kubenswrapper[6976]: I0318 08:49:05.130239 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-84qxz" podStartSLOduration=2.130174369 podStartE2EDuration="2.130174369s" podCreationTimestamp="2026-03-18 08:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:05.127387102 +0000 UTC m=+44.712988727" watchObservedRunningTime="2026-03-18 08:49:05.130174369 +0000 UTC m=+44.715775974" Mar 18 08:49:06.109631 master-0 kubenswrapper[6976]: I0318 08:49:06.109265 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-thqlt" event={"ID":"c5e43736-33c3-4949-98ca-971332541d64","Type":"ContainerStarted","Data":"c61398421b7fd2ee4f000b7637a49de3d8239c44ef8b0f5b4846650287432380"} Mar 18 08:49:06.631453 master-0 kubenswrapper[6976]: I0318 08:49:06.629955 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-thqlt" podStartSLOduration=2.629931365 podStartE2EDuration="2.629931365s" podCreationTimestamp="2026-03-18 08:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:06.176424516 +0000 UTC m=+45.762026101" watchObservedRunningTime="2026-03-18 08:49:06.629931365 +0000 UTC m=+46.215532960" Mar 18 08:49:06.631712 master-0 kubenswrapper[6976]: I0318 08:49:06.631671 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:06.632296 master-0 kubenswrapper[6976]: I0318 08:49:06.632277 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.632545 master-0 kubenswrapper[6976]: I0318 08:49:06.632507 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:06.769403 master-0 kubenswrapper[6976]: I0318 08:49:06.769358 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.769403 master-0 kubenswrapper[6976]: I0318 08:49:06.769408 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.769689 master-0 kubenswrapper[6976]: I0318 08:49:06.769434 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.872590 master-0 kubenswrapper[6976]: I0318 08:49:06.870390 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.872590 master-0 kubenswrapper[6976]: I0318 08:49:06.870450 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.872590 master-0 kubenswrapper[6976]: I0318 08:49:06.870477 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.872590 master-0 kubenswrapper[6976]: I0318 08:49:06.870943 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.872590 master-0 kubenswrapper[6976]: I0318 08:49:06.870986 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.928035 master-0 kubenswrapper[6976]: I0318 08:49:06.926326 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:06.966200 master-0 kubenswrapper[6976]: I0318 08:49:06.966124 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:07.350679 master-0 kubenswrapper[6976]: I0318 08:49:07.350064 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 08:49:07.350679 master-0 kubenswrapper[6976]: I0318 08:49:07.350116 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: I0318 08:49:07.356063 6976 patch_prober.go:28] interesting pod/apiserver-77f845f574-2wpgz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]log ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]etcd ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/generic-apiserver-start-informers ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/max-in-flight-filter ok Mar 18 08:49:07.356154 master-0 
kubenswrapper[6976]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/project.openshift.io-projectcache ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/openshift.io-startinformers ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 18 08:49:07.356154 master-0 kubenswrapper[6976]: livez check failed Mar 18 08:49:07.356588 master-0 kubenswrapper[6976]: I0318 08:49:07.356144 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" podUID="a1f2b373-0c85-4028-9089-9e9dff5d37b5" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:49:07.736250 master-0 kubenswrapper[6976]: I0318 08:49:07.736204 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:49:07.737061 master-0 kubenswrapper[6976]: I0318 08:49:07.737025 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.739109 master-0 kubenswrapper[6976]: I0318 08:49:07.738866 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:07.743285 master-0 kubenswrapper[6976]: I0318 08:49:07.743153 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:49:07.884018 master-0 kubenswrapper[6976]: I0318 08:49:07.883985 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.884251 master-0 kubenswrapper[6976]: I0318 08:49:07.884237 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.884409 master-0 kubenswrapper[6976]: I0318 08:49:07.884392 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.936464 master-0 kubenswrapper[6976]: I0318 08:49:07.935867 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 08:49:07.939827 master-0 kubenswrapper[6976]: I0318 
08:49:07.939458 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:07.939827 master-0 kubenswrapper[6976]: I0318 08:49:07.939649 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 08:49:07.941328 master-0 kubenswrapper[6976]: I0318 08:49:07.941238 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 08:49:07.985929 master-0 kubenswrapper[6976]: I0318 08:49:07.985875 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.986117 master-0 kubenswrapper[6976]: I0318 08:49:07.985952 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.986117 master-0 kubenswrapper[6976]: I0318 08:49:07.985990 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.986117 master-0 kubenswrapper[6976]: I0318 08:49:07.985990 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock\") pod 
\"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:07.986117 master-0 kubenswrapper[6976]: I0318 08:49:07.986043 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:08.000064 master-0 kubenswrapper[6976]: I0318 08:49:07.999982 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access\") pod \"installer-1-master-0\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:08.060190 master-0 kubenswrapper[6976]: I0318 08:49:08.059591 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:49:08.087543 master-0 kubenswrapper[6976]: I0318 08:49:08.087468 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.087747 master-0 kubenswrapper[6976]: I0318 08:49:08.087593 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.087782 master-0 kubenswrapper[6976]: I0318 08:49:08.087757 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.189462 master-0 kubenswrapper[6976]: I0318 08:49:08.189379 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.189651 master-0 kubenswrapper[6976]: I0318 08:49:08.189490 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock\") pod 
\"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.189651 master-0 kubenswrapper[6976]: I0318 08:49:08.189600 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.190403 master-0 kubenswrapper[6976]: I0318 08:49:08.190321 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.190511 master-0 kubenswrapper[6976]: I0318 08:49:08.190481 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.205033 master-0 kubenswrapper[6976]: I0318 08:49:08.204987 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access\") pod \"installer-1-master-0\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:08.255773 master-0 kubenswrapper[6976]: I0318 08:49:08.255666 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:49:09.841236 master-0 kubenswrapper[6976]: I0318 08:49:09.841059 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:10.096356 master-0 kubenswrapper[6976]: I0318 08:49:10.094299 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 18 08:49:10.096356 master-0 kubenswrapper[6976]: I0318 08:49:10.095915 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 08:49:10.131649 master-0 kubenswrapper[6976]: I0318 08:49:10.131336 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38b830ff-8938-4f21-8977-c29a19c85afb","Type":"ContainerStarted","Data":"4eeb3f8508d8d3c4f3d88616faaf160c40c1688d847f4d4385e29255722ded89"} Mar 18 08:49:10.140752 master-0 kubenswrapper[6976]: I0318 08:49:10.137309 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" event={"ID":"ffcdc45e-fa1e-4864-8d5f-b9916719112f","Type":"ContainerStarted","Data":"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874"} Mar 18 08:49:10.140752 master-0 kubenswrapper[6976]: I0318 08:49:10.138030 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:10.140752 master-0 kubenswrapper[6976]: I0318 08:49:10.139346 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" event={"ID":"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a","Type":"ContainerStarted","Data":"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3"} Mar 18 08:49:10.140752 master-0 kubenswrapper[6976]: I0318 08:49:10.140078 6976 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:10.142441 master-0 kubenswrapper[6976]: I0318 08:49:10.142134 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b75d3625-4131-465d-a8e2-4c42588c7630","Type":"ContainerStarted","Data":"a3d7e4fd3a2cab558b1ebece0211a1e0de8af572fefd420da566dc2b08839acd"} Mar 18 08:49:10.143186 master-0 kubenswrapper[6976]: I0318 08:49:10.143165 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:10.144299 master-0 kubenswrapper[6976]: I0318 08:49:10.144091 6976 generic.go:334] "Generic (PLEG): container finished" podID="15b6612f-3a51-4a67-a566-8c520f85c6c2" containerID="ff18d78705a1faf4db66557634d82d49694b96e1033b13b70bf5dd3176027008" exitCode=0 Mar 18 08:49:10.144299 master-0 kubenswrapper[6976]: I0318 08:49:10.144181 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" event={"ID":"15b6612f-3a51-4a67-a566-8c520f85c6c2","Type":"ContainerDied","Data":"ff18d78705a1faf4db66557634d82d49694b96e1033b13b70bf5dd3176027008"} Mar 18 08:49:10.149628 master-0 kubenswrapper[6976]: I0318 08:49:10.149306 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7af34a29-e58b-4b94-9f4d-ea5801a1851e","Type":"ContainerStarted","Data":"0ec190737f45c7ce3def154f767ca0018ddba22f275db8cb58074a094138c4de"} Mar 18 08:49:10.153946 master-0 kubenswrapper[6976]: I0318 08:49:10.153906 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pj485" event={"ID":"b2588f5c-327c-49cc-8cfb-0cce1ad758d5","Type":"ContainerStarted","Data":"6b35b6a9d75f027961b464f72541d73cf27309903154a4d40a69005d1d32379e"} Mar 18 08:49:10.193418 master-0 
kubenswrapper[6976]: I0318 08:49:10.193350 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" podStartSLOduration=5.992444691 podStartE2EDuration="12.193332584s" podCreationTimestamp="2026-03-18 08:48:58 +0000 UTC" firstStartedPulling="2026-03-18 08:49:03.435386327 +0000 UTC m=+43.020987922" lastFinishedPulling="2026-03-18 08:49:09.63627422 +0000 UTC m=+49.221875815" observedRunningTime="2026-03-18 08:49:10.167003713 +0000 UTC m=+49.752605308" watchObservedRunningTime="2026-03-18 08:49:10.193332584 +0000 UTC m=+49.778934179" Mar 18 08:49:10.213681 master-0 kubenswrapper[6976]: I0318 08:49:10.210136 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" podStartSLOduration=6.071343241 podStartE2EDuration="12.21011719s" podCreationTimestamp="2026-03-18 08:48:58 +0000 UTC" firstStartedPulling="2026-03-18 08:49:03.496230606 +0000 UTC m=+43.081832191" lastFinishedPulling="2026-03-18 08:49:09.635004545 +0000 UTC m=+49.220606140" observedRunningTime="2026-03-18 08:49:10.208453134 +0000 UTC m=+49.794054729" watchObservedRunningTime="2026-03-18 08:49:10.21011719 +0000 UTC m=+49.795718785" Mar 18 08:49:10.292608 master-0 kubenswrapper[6976]: I0318 08:49:10.290309 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:11.163474 master-0 kubenswrapper[6976]: I0318 08:49:11.163421 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b75d3625-4131-465d-a8e2-4c42588c7630","Type":"ContainerStarted","Data":"f10ab16270a7803054be2d271744f71e45d5e3fab77e472706ee3fb055b353ea"} Mar 18 08:49:11.170474 master-0 kubenswrapper[6976]: I0318 08:49:11.170437 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" event={"ID":"15b6612f-3a51-4a67-a566-8c520f85c6c2","Type":"ContainerStarted","Data":"8a920a6ae58c09e18579d0836ec646444776f09a307da458f05666e64fa41e7d"} Mar 18 08:49:11.173812 master-0 kubenswrapper[6976]: I0318 08:49:11.173780 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7af34a29-e58b-4b94-9f4d-ea5801a1851e","Type":"ContainerStarted","Data":"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c"} Mar 18 08:49:11.179772 master-0 kubenswrapper[6976]: I0318 08:49:11.179723 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pj485" event={"ID":"b2588f5c-327c-49cc-8cfb-0cce1ad758d5","Type":"ContainerStarted","Data":"5f5c2d04c2453f8003d25724f6a7a89e36168f977bfc1028f0c08ceea97001e7"} Mar 18 08:49:11.182751 master-0 kubenswrapper[6976]: I0318 08:49:11.182724 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38b830ff-8938-4f21-8977-c29a19c85afb","Type":"ContainerStarted","Data":"b28f4dc9cd44e68014d536f9ea9c8387108c84bc538f43d2e6bb244d9d074b11"} Mar 18 08:49:11.184153 master-0 kubenswrapper[6976]: I0318 08:49:11.184111 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=4.184099241 podStartE2EDuration="4.184099241s" podCreationTimestamp="2026-03-18 08:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:11.183039211 +0000 UTC m=+50.768640806" watchObservedRunningTime="2026-03-18 08:49:11.184099241 +0000 UTC m=+50.769700836" Mar 18 08:49:11.235082 master-0 kubenswrapper[6976]: I0318 08:49:11.232886 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" 
podStartSLOduration=12.991952541 podStartE2EDuration="19.232863465s" podCreationTimestamp="2026-03-18 08:48:52 +0000 UTC" firstStartedPulling="2026-03-18 08:49:03.418474347 +0000 UTC m=+43.004075942" lastFinishedPulling="2026-03-18 08:49:09.659385261 +0000 UTC m=+49.244986866" observedRunningTime="2026-03-18 08:49:11.23197492 +0000 UTC m=+50.817576515" watchObservedRunningTime="2026-03-18 08:49:11.232863465 +0000 UTC m=+50.818465060" Mar 18 08:49:11.267589 master-0 kubenswrapper[6976]: I0318 08:49:11.263266 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=5.263250318 podStartE2EDuration="5.263250318s" podCreationTimestamp="2026-03-18 08:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:11.262796436 +0000 UTC m=+50.848398031" watchObservedRunningTime="2026-03-18 08:49:11.263250318 +0000 UTC m=+50.848851903" Mar 18 08:49:11.292586 master-0 kubenswrapper[6976]: I0318 08:49:11.291966 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pj485" podStartSLOduration=2.675667625 podStartE2EDuration="7.291948835s" podCreationTimestamp="2026-03-18 08:49:04 +0000 UTC" firstStartedPulling="2026-03-18 08:49:05.018740965 +0000 UTC m=+44.604342560" lastFinishedPulling="2026-03-18 08:49:09.635022175 +0000 UTC m=+49.220623770" observedRunningTime="2026-03-18 08:49:11.289641911 +0000 UTC m=+50.875243516" watchObservedRunningTime="2026-03-18 08:49:11.291948835 +0000 UTC m=+50.877550430" Mar 18 08:49:12.187518 master-0 kubenswrapper[6976]: I0318 08:49:12.187473 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pj485" Mar 18 08:49:12.358698 master-0 kubenswrapper[6976]: I0318 08:49:12.358663 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 08:49:12.363592 master-0 kubenswrapper[6976]: I0318 08:49:12.363546 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 08:49:12.386127 master-0 kubenswrapper[6976]: I0318 08:49:12.386070 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=5.38605138 podStartE2EDuration="5.38605138s" podCreationTimestamp="2026-03-18 08:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:11.333480028 +0000 UTC m=+50.919081623" watchObservedRunningTime="2026-03-18 08:49:12.38605138 +0000 UTC m=+51.971652975" Mar 18 08:49:15.423526 master-0 kubenswrapper[6976]: I0318 08:49:15.423436 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:15.424194 master-0 kubenswrapper[6976]: I0318 08:49:15.423663 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" containerName="installer" containerID="cri-o://8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c" gracePeriod=30 Mar 18 08:49:15.492149 master-0 kubenswrapper[6976]: I0318 08:49:15.492042 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:15.492394 master-0 kubenswrapper[6976]: I0318 08:49:15.492309 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:15.502366 master-0 kubenswrapper[6976]: I0318 08:49:15.502319 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:15.924021 master-0 kubenswrapper[6976]: I0318 08:49:15.923982 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7af34a29-e58b-4b94-9f4d-ea5801a1851e/installer/0.log" Mar 18 08:49:15.924202 master-0 kubenswrapper[6976]: I0318 08:49:15.924043 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:16.020026 master-0 kubenswrapper[6976]: I0318 08:49:16.019962 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock\") pod \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " Mar 18 08:49:16.020026 master-0 kubenswrapper[6976]: I0318 08:49:16.020012 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir\") pod \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " Mar 18 08:49:16.020026 master-0 kubenswrapper[6976]: I0318 08:49:16.020042 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access\") pod \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\" (UID: \"7af34a29-e58b-4b94-9f4d-ea5801a1851e\") " Mar 18 08:49:16.020470 master-0 kubenswrapper[6976]: I0318 08:49:16.020421 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7af34a29-e58b-4b94-9f4d-ea5801a1851e" (UID: "7af34a29-e58b-4b94-9f4d-ea5801a1851e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:16.020667 master-0 kubenswrapper[6976]: I0318 08:49:16.020470 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock" (OuterVolumeSpecName: "var-lock") pod "7af34a29-e58b-4b94-9f4d-ea5801a1851e" (UID: "7af34a29-e58b-4b94-9f4d-ea5801a1851e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:16.024016 master-0 kubenswrapper[6976]: I0318 08:49:16.023978 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7af34a29-e58b-4b94-9f4d-ea5801a1851e" (UID: "7af34a29-e58b-4b94-9f4d-ea5801a1851e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:16.121187 master-0 kubenswrapper[6976]: I0318 08:49:16.121123 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.121187 master-0 kubenswrapper[6976]: I0318 08:49:16.121153 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.121187 master-0 kubenswrapper[6976]: I0318 08:49:16.121165 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7af34a29-e58b-4b94-9f4d-ea5801a1851e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:16.204948 master-0 kubenswrapper[6976]: I0318 08:49:16.204825 6976 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_7af34a29-e58b-4b94-9f4d-ea5801a1851e/installer/0.log" Mar 18 08:49:16.204948 master-0 kubenswrapper[6976]: I0318 08:49:16.204875 6976 generic.go:334] "Generic (PLEG): container finished" podID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" containerID="8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c" exitCode=1 Mar 18 08:49:16.205215 master-0 kubenswrapper[6976]: I0318 08:49:16.204950 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 18 08:49:16.205215 master-0 kubenswrapper[6976]: I0318 08:49:16.204949 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7af34a29-e58b-4b94-9f4d-ea5801a1851e","Type":"ContainerDied","Data":"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c"} Mar 18 08:49:16.205215 master-0 kubenswrapper[6976]: I0318 08:49:16.205011 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"7af34a29-e58b-4b94-9f4d-ea5801a1851e","Type":"ContainerDied","Data":"0ec190737f45c7ce3def154f767ca0018ddba22f275db8cb58074a094138c4de"} Mar 18 08:49:16.205215 master-0 kubenswrapper[6976]: I0318 08:49:16.205053 6976 scope.go:117] "RemoveContainer" containerID="8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c" Mar 18 08:49:16.209775 master-0 kubenswrapper[6976]: I0318 08:49:16.209682 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 08:49:16.222843 master-0 kubenswrapper[6976]: I0318 08:49:16.222796 6976 scope.go:117] "RemoveContainer" containerID="8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c" Mar 18 08:49:16.223299 master-0 kubenswrapper[6976]: E0318 08:49:16.223250 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c\": container with ID starting with 8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c not found: ID does not exist" containerID="8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c" Mar 18 08:49:16.223386 master-0 kubenswrapper[6976]: I0318 08:49:16.223313 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c"} err="failed to get container status \"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c\": rpc error: code = NotFound desc = could not find container \"8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c\": container with ID starting with 8b62085d2da2501562754f4f03b2bb023b7d4c3e25c87634e88ab2792eed9b1c not found: ID does not exist" Mar 18 08:49:16.263611 master-0 kubenswrapper[6976]: I0318 08:49:16.261625 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:16.277230 master-0 kubenswrapper[6976]: I0318 08:49:16.275030 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 18 08:49:16.603622 master-0 kubenswrapper[6976]: I0318 08:49:16.603550 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" path="/var/lib/kubelet/pods/7af34a29-e58b-4b94-9f4d-ea5801a1851e/volumes" Mar 18 08:49:17.575096 master-0 kubenswrapper[6976]: I0318 08:49:17.575020 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:17.575364 master-0 kubenswrapper[6976]: I0318 08:49:17.575329 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" 
podUID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" containerName="controller-manager" containerID="cri-o://7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874" gracePeriod=30 Mar 18 08:49:17.584882 master-0 kubenswrapper[6976]: I0318 08:49:17.584837 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:17.585088 master-0 kubenswrapper[6976]: I0318 08:49:17.585013 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" podUID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" containerName="route-controller-manager" containerID="cri-o://1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3" gracePeriod=30 Mar 18 08:49:18.034360 master-0 kubenswrapper[6976]: I0318 08:49:18.034338 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:18.064511 master-0 kubenswrapper[6976]: I0318 08:49:18.064452 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:18.158028 master-0 kubenswrapper[6976]: I0318 08:49:18.157972 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles\") pod \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " Mar 18 08:49:18.158028 master-0 kubenswrapper[6976]: I0318 08:49:18.158013 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config\") pod \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158050 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfw4w\" (UniqueName: \"kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w\") pod \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158086 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert\") pod \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158113 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca\") pod \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\" (UID: \"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158133 6976 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca\") pod \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158155 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config\") pod \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158175 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfptn\" (UniqueName: \"kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn\") pod \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " Mar 18 08:49:18.158305 master-0 kubenswrapper[6976]: I0318 08:49:18.158194 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert\") pod \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\" (UID: \"ffcdc45e-fa1e-4864-8d5f-b9916719112f\") " Mar 18 08:49:18.159618 master-0 kubenswrapper[6976]: I0318 08:49:18.159273 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config" (OuterVolumeSpecName: "config") pod "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" (UID: "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.159618 master-0 kubenswrapper[6976]: I0318 08:49:18.159474 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ffcdc45e-fa1e-4864-8d5f-b9916719112f" (UID: "ffcdc45e-fa1e-4864-8d5f-b9916719112f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.159618 master-0 kubenswrapper[6976]: I0318 08:49:18.159486 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" (UID: "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.159618 master-0 kubenswrapper[6976]: I0318 08:49:18.159538 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ffcdc45e-fa1e-4864-8d5f-b9916719112f" (UID: "ffcdc45e-fa1e-4864-8d5f-b9916719112f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.159618 master-0 kubenswrapper[6976]: I0318 08:49:18.159556 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config" (OuterVolumeSpecName: "config") pod "ffcdc45e-fa1e-4864-8d5f-b9916719112f" (UID: "ffcdc45e-fa1e-4864-8d5f-b9916719112f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.161158 master-0 kubenswrapper[6976]: I0318 08:49:18.161113 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ffcdc45e-fa1e-4864-8d5f-b9916719112f" (UID: "ffcdc45e-fa1e-4864-8d5f-b9916719112f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:18.162330 master-0 kubenswrapper[6976]: I0318 08:49:18.162290 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" (UID: "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:18.162426 master-0 kubenswrapper[6976]: I0318 08:49:18.162354 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn" (OuterVolumeSpecName: "kube-api-access-gfptn") pod "ffcdc45e-fa1e-4864-8d5f-b9916719112f" (UID: "ffcdc45e-fa1e-4864-8d5f-b9916719112f"). InnerVolumeSpecName "kube-api-access-gfptn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:18.162587 master-0 kubenswrapper[6976]: I0318 08:49:18.162537 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w" (OuterVolumeSpecName: "kube-api-access-wfw4w") pod "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" (UID: "d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a"). InnerVolumeSpecName "kube-api-access-wfw4w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:18.227002 master-0 kubenswrapper[6976]: I0318 08:49:18.226937 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 08:49:18.227200 master-0 kubenswrapper[6976]: E0318 08:49:18.227162 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" containerName="installer" Mar 18 08:49:18.227200 master-0 kubenswrapper[6976]: I0318 08:49:18.227180 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" containerName="installer" Mar 18 08:49:18.227263 master-0 kubenswrapper[6976]: E0318 08:49:18.227200 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" containerName="route-controller-manager" Mar 18 08:49:18.227263 master-0 kubenswrapper[6976]: I0318 08:49:18.227213 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" containerName="route-controller-manager" Mar 18 08:49:18.227263 master-0 kubenswrapper[6976]: E0318 08:49:18.227235 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" containerName="controller-manager" Mar 18 08:49:18.227263 master-0 kubenswrapper[6976]: I0318 08:49:18.227247 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" containerName="controller-manager" Mar 18 08:49:18.227371 master-0 kubenswrapper[6976]: I0318 08:49:18.227358 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" containerName="controller-manager" Mar 18 08:49:18.227400 master-0 kubenswrapper[6976]: I0318 08:49:18.227377 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" containerName="route-controller-manager" Mar 18 08:49:18.227429 master-0 
kubenswrapper[6976]: I0318 08:49:18.227405 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="7af34a29-e58b-4b94-9f4d-ea5801a1851e" containerName="installer" Mar 18 08:49:18.227903 master-0 kubenswrapper[6976]: I0318 08:49:18.227868 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.228255 master-0 kubenswrapper[6976]: I0318 08:49:18.228203 6976 generic.go:334] "Generic (PLEG): container finished" podID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" containerID="7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874" exitCode=0 Mar 18 08:49:18.228350 master-0 kubenswrapper[6976]: I0318 08:49:18.228319 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" event={"ID":"ffcdc45e-fa1e-4864-8d5f-b9916719112f","Type":"ContainerDied","Data":"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874"} Mar 18 08:49:18.228388 master-0 kubenswrapper[6976]: I0318 08:49:18.228361 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" event={"ID":"ffcdc45e-fa1e-4864-8d5f-b9916719112f","Type":"ContainerDied","Data":"63ae45db776e3ed942737171110f734a8575d6642281b093e32333f1afd4c378"} Mar 18 08:49:18.228388 master-0 kubenswrapper[6976]: I0318 08:49:18.228383 6976 scope.go:117] "RemoveContainer" containerID="7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874" Mar 18 08:49:18.228526 master-0 kubenswrapper[6976]: I0318 08:49:18.228501 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dbd749c-2j5zn" Mar 18 08:49:18.234522 master-0 kubenswrapper[6976]: I0318 08:49:18.233643 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-jw7t8" Mar 18 08:49:18.235246 master-0 kubenswrapper[6976]: I0318 08:49:18.235201 6976 generic.go:334] "Generic (PLEG): container finished" podID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" containerID="1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3" exitCode=0 Mar 18 08:49:18.239676 master-0 kubenswrapper[6976]: I0318 08:49:18.236062 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" event={"ID":"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a","Type":"ContainerDied","Data":"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3"} Mar 18 08:49:18.239676 master-0 kubenswrapper[6976]: I0318 08:49:18.236112 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" event={"ID":"d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a","Type":"ContainerDied","Data":"4e0b2c850f8305c249d90e52b80380962f9cb2f5c3d5e9878c440f1b035def58"} Mar 18 08:49:18.239676 master-0 kubenswrapper[6976]: I0318 08:49:18.236142 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p" Mar 18 08:49:18.242708 master-0 kubenswrapper[6976]: I0318 08:49:18.242648 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 08:49:18.254870 master-0 kubenswrapper[6976]: I0318 08:49:18.254815 6976 scope.go:117] "RemoveContainer" containerID="7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874" Mar 18 08:49:18.255720 master-0 kubenswrapper[6976]: E0318 08:49:18.255661 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874\": container with ID starting with 7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874 not found: ID does not exist" containerID="7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874" Mar 18 08:49:18.255720 master-0 kubenswrapper[6976]: I0318 08:49:18.255700 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874"} err="failed to get container status \"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874\": rpc error: code = NotFound desc = could not find container \"7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874\": container with ID starting with 7ccdf114af56ad644a5625b0d07effd75722aa2a753424f42aeafa8c2599d874 not found: ID does not exist" Mar 18 08:49:18.255889 master-0 kubenswrapper[6976]: I0318 08:49:18.255728 6976 scope.go:117] "RemoveContainer" containerID="1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3" Mar 18 08:49:18.259860 master-0 kubenswrapper[6976]: I0318 08:49:18.259817 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-serving-cert\") on 
node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.259860 master-0 kubenswrapper[6976]: I0318 08:49:18.259847 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.259860 master-0 kubenswrapper[6976]: I0318 08:49:18.259856 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.259860 master-0 kubenswrapper[6976]: I0318 08:49:18.259865 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.260140 master-0 kubenswrapper[6976]: I0318 08:49:18.259893 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfptn\" (UniqueName: \"kubernetes.io/projected/ffcdc45e-fa1e-4864-8d5f-b9916719112f-kube-api-access-gfptn\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.260140 master-0 kubenswrapper[6976]: I0318 08:49:18.259902 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffcdc45e-fa1e-4864-8d5f-b9916719112f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.260140 master-0 kubenswrapper[6976]: I0318 08:49:18.259911 6976 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ffcdc45e-fa1e-4864-8d5f-b9916719112f-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.260140 master-0 kubenswrapper[6976]: I0318 08:49:18.259920 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-config\") on node \"master-0\" DevicePath \"\"" Mar 18 
08:49:18.260140 master-0 kubenswrapper[6976]: I0318 08:49:18.259930 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfw4w\" (UniqueName: \"kubernetes.io/projected/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a-kube-api-access-wfw4w\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.275965 master-0 kubenswrapper[6976]: I0318 08:49:18.275921 6976 scope.go:117] "RemoveContainer" containerID="1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3" Mar 18 08:49:18.276257 master-0 kubenswrapper[6976]: E0318 08:49:18.276219 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3\": container with ID starting with 1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3 not found: ID does not exist" containerID="1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3" Mar 18 08:49:18.276344 master-0 kubenswrapper[6976]: I0318 08:49:18.276248 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3"} err="failed to get container status \"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3\": rpc error: code = NotFound desc = could not find container \"1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3\": container with ID starting with 1fee89add05eef0e29dc986eb54dc0a1793fcdefbdffd46f0c8f0e64efca6fe3 not found: ID does not exist" Mar 18 08:49:18.286665 master-0 kubenswrapper[6976]: I0318 08:49:18.286614 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:18.292335 master-0 kubenswrapper[6976]: I0318 08:49:18.292263 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dbd749c-2j5zn"] Mar 18 08:49:18.306600 master-0 
kubenswrapper[6976]: I0318 08:49:18.306485 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:18.308206 master-0 kubenswrapper[6976]: I0318 08:49:18.308126 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c95d4578f-2qx7p"] Mar 18 08:49:18.362619 master-0 kubenswrapper[6976]: I0318 08:49:18.360712 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.362619 master-0 kubenswrapper[6976]: I0318 08:49:18.360894 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.362619 master-0 kubenswrapper[6976]: I0318 08:49:18.360930 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.462371 master-0 kubenswrapper[6976]: I0318 08:49:18.462312 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " 
pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.462633 master-0 kubenswrapper[6976]: I0318 08:49:18.462428 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.462633 master-0 kubenswrapper[6976]: I0318 08:49:18.462547 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.462633 master-0 kubenswrapper[6976]: I0318 08:49:18.462611 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.462757 master-0 kubenswrapper[6976]: I0318 08:49:18.462670 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.478357 master-0 kubenswrapper[6976]: I0318 08:49:18.478300 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " 
pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.562389 master-0 kubenswrapper[6976]: I0318 08:49:18.562311 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:49:18.610303 master-0 kubenswrapper[6976]: I0318 08:49:18.610255 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a" path="/var/lib/kubelet/pods/d2b4b463-bdd3-4624-9aa2-8ed7e7f7529a/volumes" Mar 18 08:49:18.616716 master-0 kubenswrapper[6976]: I0318 08:49:18.611434 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcdc45e-fa1e-4864-8d5f-b9916719112f" path="/var/lib/kubelet/pods/ffcdc45e-fa1e-4864-8d5f-b9916719112f/volumes" Mar 18 08:49:18.722479 master-0 kubenswrapper[6976]: I0318 08:49:18.722364 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"] Mar 18 08:49:18.722733 master-0 kubenswrapper[6976]: I0318 08:49:18.722688 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" podUID="85d361a2-3f83-4857-b96e-3e98fcf33463" containerName="cluster-version-operator" containerID="cri-o://ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e" gracePeriod=130 Mar 18 08:49:18.872090 master-0 kubenswrapper[6976]: I0318 08:49:18.872054 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" Mar 18 08:49:18.967832 master-0 kubenswrapper[6976]: I0318 08:49:18.967776 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") pod \"85d361a2-3f83-4857-b96e-3e98fcf33463\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " Mar 18 08:49:18.967832 master-0 kubenswrapper[6976]: I0318 08:49:18.967815 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") pod \"85d361a2-3f83-4857-b96e-3e98fcf33463\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " Mar 18 08:49:18.967832 master-0 kubenswrapper[6976]: I0318 08:49:18.967834 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") pod \"85d361a2-3f83-4857-b96e-3e98fcf33463\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " Mar 18 08:49:18.968180 master-0 kubenswrapper[6976]: I0318 08:49:18.967860 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") pod \"85d361a2-3f83-4857-b96e-3e98fcf33463\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " Mar 18 08:49:18.968180 master-0 kubenswrapper[6976]: I0318 08:49:18.967899 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") pod \"85d361a2-3f83-4857-b96e-3e98fcf33463\" (UID: \"85d361a2-3f83-4857-b96e-3e98fcf33463\") " Mar 18 08:49:18.968180 master-0 kubenswrapper[6976]: I0318 
08:49:18.967981 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "85d361a2-3f83-4857-b96e-3e98fcf33463" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:18.968180 master-0 kubenswrapper[6976]: I0318 08:49:18.968010 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "85d361a2-3f83-4857-b96e-3e98fcf33463" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:18.968433 master-0 kubenswrapper[6976]: I0318 08:49:18.968391 6976 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.968433 master-0 kubenswrapper[6976]: I0318 08:49:18.968423 6976 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85d361a2-3f83-4857-b96e-3e98fcf33463-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:18.968643 master-0 kubenswrapper[6976]: I0318 08:49:18.968484 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca" (OuterVolumeSpecName: "service-ca") pod "85d361a2-3f83-4857-b96e-3e98fcf33463" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:49:18.971277 master-0 kubenswrapper[6976]: I0318 08:49:18.971017 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85d361a2-3f83-4857-b96e-3e98fcf33463" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:49:18.973302 master-0 kubenswrapper[6976]: I0318 08:49:18.972842 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85d361a2-3f83-4857-b96e-3e98fcf33463" (UID: "85d361a2-3f83-4857-b96e-3e98fcf33463"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:18.973302 master-0 kubenswrapper[6976]: I0318 08:49:18.972960 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 18 08:49:19.071740 master-0 kubenswrapper[6976]: I0318 08:49:19.071283 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85d361a2-3f83-4857-b96e-3e98fcf33463-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:19.071740 master-0 kubenswrapper[6976]: I0318 08:49:19.071322 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85d361a2-3f83-4857-b96e-3e98fcf33463-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:19.071740 master-0 kubenswrapper[6976]: I0318 08:49:19.071337 6976 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85d361a2-3f83-4857-b96e-3e98fcf33463-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 
08:49:19.221955 master-0 kubenswrapper[6976]: I0318 08:49:19.221882 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"] Mar 18 08:49:19.222288 master-0 kubenswrapper[6976]: E0318 08:49:19.222248 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85d361a2-3f83-4857-b96e-3e98fcf33463" containerName="cluster-version-operator" Mar 18 08:49:19.222288 master-0 kubenswrapper[6976]: I0318 08:49:19.222279 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="85d361a2-3f83-4857-b96e-3e98fcf33463" containerName="cluster-version-operator" Mar 18 08:49:19.222471 master-0 kubenswrapper[6976]: I0318 08:49:19.222436 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="85d361a2-3f83-4857-b96e-3e98fcf33463" containerName="cluster-version-operator" Mar 18 08:49:19.223017 master-0 kubenswrapper[6976]: I0318 08:49:19.222978 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" Mar 18 08:49:19.224510 master-0 kubenswrapper[6976]: I0318 08:49:19.224414 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"] Mar 18 08:49:19.225032 master-0 kubenswrapper[6976]: I0318 08:49:19.224991 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 08:49:19.225110 master-0 kubenswrapper[6976]: I0318 08:49:19.225059 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" Mar 18 08:49:19.225160 master-0 kubenswrapper[6976]: I0318 08:49:19.225101 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:19.227889 master-0 kubenswrapper[6976]: I0318 08:49:19.227734 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:19.227889 master-0 kubenswrapper[6976]: I0318 08:49:19.227804 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 08:49:19.228469 master-0 kubenswrapper[6976]: I0318 08:49:19.228123 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6" Mar 18 08:49:19.228469 master-0 kubenswrapper[6976]: I0318 08:49:19.228192 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 08:49:19.228550 master-0 kubenswrapper[6976]: I0318 08:49:19.228526 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 08:49:19.228550 master-0 kubenswrapper[6976]: I0318 08:49:19.228527 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 08:49:19.228693 master-0 kubenswrapper[6976]: I0318 08:49:19.228658 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 08:49:19.228748 master-0 kubenswrapper[6976]: I0318 08:49:19.228716 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 08:49:19.228966 master-0 kubenswrapper[6976]: I0318 08:49:19.228930 6976 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4"
Mar 18 08:49:19.229311 master-0 kubenswrapper[6976]: I0318 08:49:19.229243 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 08:49:19.243464 master-0 kubenswrapper[6976]: I0318 08:49:19.243112 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"]
Mar 18 08:49:19.243643 master-0 kubenswrapper[6976]: I0318 08:49:19.243595 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 08:49:19.245274 master-0 kubenswrapper[6976]: I0318 08:49:19.245160 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"]
Mar 18 08:49:19.254184 master-0 kubenswrapper[6976]: I0318 08:49:19.254145 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"3253d87f-ae48-42cf-950f-f508a9b82d0d","Type":"ContainerStarted","Data":"6669c488a020cf374cca62487f896819e27005e13ddd29853b483ea8a721d767"}
Mar 18 08:49:19.264251 master-0 kubenswrapper[6976]: I0318 08:49:19.264209 6976 generic.go:334] "Generic (PLEG): container finished" podID="85d361a2-3f83-4857-b96e-3e98fcf33463" containerID="ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e" exitCode=0
Mar 18 08:49:19.264398 master-0 kubenswrapper[6976]: I0318 08:49:19.264256 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" event={"ID":"85d361a2-3f83-4857-b96e-3e98fcf33463","Type":"ContainerDied","Data":"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"}
Mar 18 08:49:19.264398 master-0 kubenswrapper[6976]: I0318 08:49:19.264284 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr" event={"ID":"85d361a2-3f83-4857-b96e-3e98fcf33463","Type":"ContainerDied","Data":"b2a09192199dc47c2741f7796cc99b6c355559f7813fa31bd13f72c5529a9df3"}
Mar 18 08:49:19.264398 master-0 kubenswrapper[6976]: I0318 08:49:19.264280 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"
Mar 18 08:49:19.264398 master-0 kubenswrapper[6976]: I0318 08:49:19.264304 6976 scope.go:117] "RemoveContainer" containerID="ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"
Mar 18 08:49:19.276293 master-0 kubenswrapper[6976]: I0318 08:49:19.276250 6976 scope.go:117] "RemoveContainer" containerID="ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"
Mar 18 08:49:19.278270 master-0 kubenswrapper[6976]: E0318 08:49:19.278198 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e\": container with ID starting with ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e not found: ID does not exist" containerID="ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"
Mar 18 08:49:19.278358 master-0 kubenswrapper[6976]: I0318 08:49:19.278275 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e"} err="failed to get container status \"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e\": rpc error: code = NotFound desc = could not find container \"ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e\": container with ID starting with ced431e20627e427a21cb8fd42c6730cdc4485d805ff7ffe44cd7d69e197e11e not found: ID does not exist"
Mar 18 08:49:19.304910 master-0 kubenswrapper[6976]: I0318 08:49:19.304864 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"]
Mar 18 08:49:19.313160 master-0 kubenswrapper[6976]: I0318 08:49:19.313108 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-t9zrr"]
Mar 18 08:49:19.364615 master-0 kubenswrapper[6976]: I0318 08:49:19.363903 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"]
Mar 18 08:49:19.364615 master-0 kubenswrapper[6976]: I0318 08:49:19.364427 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.368599 master-0 kubenswrapper[6976]: I0318 08:49:19.368327 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 08:49:19.368599 master-0 kubenswrapper[6976]: I0318 08:49:19.368466 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-kg24z"
Mar 18 08:49:19.368807 master-0 kubenswrapper[6976]: I0318 08:49:19.368609 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 08:49:19.368807 master-0 kubenswrapper[6976]: I0318 08:49:19.368708 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 18 08:49:19.375587 master-0 kubenswrapper[6976]: I0318 08:49:19.375514 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rht5n\" (UniqueName: \"kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.375587 master-0 kubenswrapper[6976]: I0318 08:49:19.375590 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.375757 master-0 kubenswrapper[6976]: I0318 08:49:19.375663 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.375757 master-0 kubenswrapper[6976]: I0318 08:49:19.375696 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.375757 master-0 kubenswrapper[6976]: I0318 08:49:19.375747 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.375843 master-0 kubenswrapper[6976]: I0318 08:49:19.375784 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.375843 master-0 kubenswrapper[6976]: I0318 08:49:19.375811 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvqwn\" (UniqueName: \"kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.375893 master-0 kubenswrapper[6976]: I0318 08:49:19.375855 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.375927 master-0 kubenswrapper[6976]: I0318 08:49:19.375908 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.476681 master-0 kubenswrapper[6976]: I0318 08:49:19.476578 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.476681 master-0 kubenswrapper[6976]: I0318 08:49:19.476627 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.476681 master-0 kubenswrapper[6976]: I0318 08:49:19.476668 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476692 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476711 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476743 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476762 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvqwn\" (UniqueName: \"kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476789 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476811 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476829 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476856 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476874 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rht5n\" (UniqueName: \"kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.476895 master-0 kubenswrapper[6976]: I0318 08:49:19.476890 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.477156 master-0 kubenswrapper[6976]: I0318 08:49:19.476909 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.478021 master-0 kubenswrapper[6976]: I0318 08:49:19.477995 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.478158 master-0 kubenswrapper[6976]: I0318 08:49:19.478127 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.478263 master-0 kubenswrapper[6976]: I0318 08:49:19.478236 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.478659 master-0 kubenswrapper[6976]: I0318 08:49:19.478633 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.478793 master-0 kubenswrapper[6976]: I0318 08:49:19.478762 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.480242 master-0 kubenswrapper[6976]: I0318 08:49:19.480218 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.480309 master-0 kubenswrapper[6976]: I0318 08:49:19.480289 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.496242 master-0 kubenswrapper[6976]: I0318 08:49:19.496196 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvqwn\" (UniqueName: \"kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn\") pod \"controller-manager-7c945f8f5b-967lx\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") " pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.497035 master-0 kubenswrapper[6976]: I0318 08:49:19.496977 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rht5n\" (UniqueName: \"kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n\") pod \"route-controller-manager-85d945cb54-px8bg\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") " pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.570468 master-0 kubenswrapper[6976]: I0318 08:49:19.570408 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:19.577705 master-0 kubenswrapper[6976]: I0318 08:49:19.577624 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.577705 master-0 kubenswrapper[6976]: I0318 08:49:19.577684 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.577793 master-0 kubenswrapper[6976]: I0318 08:49:19.577729 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.577833 master-0 kubenswrapper[6976]: I0318 08:49:19.577777 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.577895 master-0 kubenswrapper[6976]: I0318 08:49:19.577862 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.577930 master-0 kubenswrapper[6976]: I0318 08:49:19.577903 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.578631 master-0 kubenswrapper[6976]: I0318 08:49:19.578484 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.579764 master-0 kubenswrapper[6976]: I0318 08:49:19.579739 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.582542 master-0 kubenswrapper[6976]: I0318 08:49:19.582487 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.592247 master-0 kubenswrapper[6976]: I0318 08:49:19.592210 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:19.603843 master-0 kubenswrapper[6976]: I0318 08:49:19.603793 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.738425 master-0 kubenswrapper[6976]: I0318 08:49:19.738326 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp"
Mar 18 08:49:19.970864 master-0 kubenswrapper[6976]: I0318 08:49:19.970830 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"]
Mar 18 08:49:20.017189 master-0 kubenswrapper[6976]: W0318 08:49:20.017102 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c421f2_2154_47eb_bf86_e5fe1b980d76.slice/crio-81ead4c8f220d1963f29e356d7dcbc6fa146175546302c4e747d85a34e03f0cd WatchSource:0}: Error finding container 81ead4c8f220d1963f29e356d7dcbc6fa146175546302c4e747d85a34e03f0cd: Status 404 returned error can't find the container with id 81ead4c8f220d1963f29e356d7dcbc6fa146175546302c4e747d85a34e03f0cd
Mar 18 08:49:20.045463 master-0 kubenswrapper[6976]: I0318 08:49:20.045414 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"]
Mar 18 08:49:20.059879 master-0 kubenswrapper[6976]: W0318 08:49:20.059838 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7479b08_17be_4127_893b_c13007c8e4b7.slice/crio-1e051b7faa69e903ae0f651dcaa043ed1f5ae5f07bccc322860c3fdfaf058d32 WatchSource:0}: Error finding container 1e051b7faa69e903ae0f651dcaa043ed1f5ae5f07bccc322860c3fdfaf058d32: Status 404 returned error can't find the container with id 1e051b7faa69e903ae0f651dcaa043ed1f5ae5f07bccc322860c3fdfaf058d32
Mar 18 08:49:20.273713 master-0 kubenswrapper[6976]: I0318 08:49:20.273602 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"3253d87f-ae48-42cf-950f-f508a9b82d0d","Type":"ContainerStarted","Data":"f4700f538c7d454f7c9d134fd47d7a5c2ce673d0b9bd02c96a2dfc730672550e"}
Mar 18 08:49:20.282466 master-0 kubenswrapper[6976]: I0318 08:49:20.282296 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" event={"ID":"d7479b08-17be-4127-893b-c13007c8e4b7","Type":"ContainerStarted","Data":"e7d529e7b664f8bc925f1171003f5b0bb292cf1e058d32784adb704c8243994d"}
Mar 18 08:49:20.282466 master-0 kubenswrapper[6976]: I0318 08:49:20.282365 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" event={"ID":"d7479b08-17be-4127-893b-c13007c8e4b7","Type":"ContainerStarted","Data":"1e051b7faa69e903ae0f651dcaa043ed1f5ae5f07bccc322860c3fdfaf058d32"}
Mar 18 08:49:20.282851 master-0 kubenswrapper[6976]: I0318 08:49:20.282734 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:20.283382 master-0 kubenswrapper[6976]: I0318 08:49:20.283361 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" event={"ID":"9cc640bf-cb5f-4493-b47b-6ea6f524525e","Type":"ContainerStarted","Data":"68dbdaacdfd0decbc5714d0c2d9b1c957599fedf93c87f030a7ff598c7a78381"}
Mar 18 08:49:20.283440 master-0 kubenswrapper[6976]: I0318 08:49:20.283384 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" event={"ID":"9cc640bf-cb5f-4493-b47b-6ea6f524525e","Type":"ContainerStarted","Data":"0dc14cc88891929c02d96732c893456d82425d1db68dfef9ae085c39e17cfc21"}
Mar 18 08:49:20.285134 master-0 kubenswrapper[6976]: I0318 08:49:20.285109 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerStarted","Data":"f7406136c7d1b5446d31fb2d477916274551fd8657f89454d9fad0aeccedb87c"}
Mar 18 08:49:20.285208 master-0 kubenswrapper[6976]: I0318 08:49:20.285138 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerStarted","Data":"81ead4c8f220d1963f29e356d7dcbc6fa146175546302c4e747d85a34e03f0cd"}
Mar 18 08:49:20.285857 master-0 kubenswrapper[6976]: I0318 08:49:20.285826 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:20.291325 master-0 kubenswrapper[6976]: I0318 08:49:20.291292 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:49:20.296092 master-0 kubenswrapper[6976]: I0318 08:49:20.295925 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=2.295912398 podStartE2EDuration="2.295912398s" podCreationTimestamp="2026-03-18 08:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:20.294076857 +0000 UTC m=+59.879678452" watchObservedRunningTime="2026-03-18 08:49:20.295912398 +0000 UTC m=+59.881513993"
Mar 18 08:49:20.320356 master-0 kubenswrapper[6976]: I0318 08:49:20.320287 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podStartSLOduration=3.320092229 podStartE2EDuration="3.320092229s" podCreationTimestamp="2026-03-18 08:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:20.318923277 +0000 UTC m=+59.904524872" watchObservedRunningTime="2026-03-18 08:49:20.320092229 +0000 UTC m=+59.905693824"
Mar 18 08:49:20.343756 master-0 kubenswrapper[6976]: I0318 08:49:20.343675 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" podStartSLOduration=3.343656104 podStartE2EDuration="3.343656104s" podCreationTimestamp="2026-03-18 08:49:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:20.34101945 +0000 UTC m=+59.926621055" watchObservedRunningTime="2026-03-18 08:49:20.343656104 +0000 UTC m=+59.929257709"
Mar 18 08:49:20.362127 master-0 kubenswrapper[6976]: I0318 08:49:20.362051 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" podStartSLOduration=1.362033244 podStartE2EDuration="1.362033244s" podCreationTimestamp="2026-03-18 08:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:49:20.360500631 +0000 UTC m=+59.946102236" watchObservedRunningTime="2026-03-18 08:49:20.362033244 +0000 UTC m=+59.947634839"
Mar 18 08:49:20.553399 master-0 kubenswrapper[6976]: I0318 08:49:20.553256 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:49:20.628144 master-0 kubenswrapper[6976]: I0318 08:49:20.628083 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85d361a2-3f83-4857-b96e-3e98fcf33463" path="/var/lib/kubelet/pods/85d361a2-3f83-4857-b96e-3e98fcf33463/volumes"
Mar 18 08:49:22.523922 master-0 kubenswrapper[6976]: I0318 08:49:22.523874 6976 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 08:49:22.524559 master-0 kubenswrapper[6976]: I0318 08:49:22.524084 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047" gracePeriod=30
Mar 18 08:49:22.524559 master-0 kubenswrapper[6976]: I0318 08:49:22.524131 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9" gracePeriod=30
Mar 18 08:49:22.526720 master-0 kubenswrapper[6976]: I0318 08:49:22.526507 6976 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: E0318 08:49:22.526777 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: I0318 08:49:22.526799 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: E0318 08:49:22.526814 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: I0318 08:49:22.526826 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: I0318 08:49:22.526974 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd"
Mar 18 08:49:22.527140 master-0 kubenswrapper[6976]: I0318 08:49:22.526996 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl"
Mar 18 08:49:22.536967 master-0 kubenswrapper[6976]: I0318 08:49:22.529399 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:22.598741 master-0 kubenswrapper[6976]: I0318 08:49:22.598700 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pj485"
Mar 18 08:49:22.635902 master-0 kubenswrapper[6976]: I0318 08:49:22.635785 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:22.636135 master-0 kubenswrapper[6976]: I0318 08:49:22.635916 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:22.636135 master-0 kubenswrapper[6976]: I0318 08:49:22.636095 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:22.636223 master-0 kubenswrapper[6976]: I0318 08:49:22.636165 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 08:49:22.636327 master-0 kubenswrapper[6976]: I0318 08:49:22.636277 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.636490 master-0 kubenswrapper[6976]: I0318 08:49:22.636433 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737452 master-0 kubenswrapper[6976]: I0318 08:49:22.737371 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737452 master-0 kubenswrapper[6976]: I0318 08:49:22.737438 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737803 master-0 kubenswrapper[6976]: I0318 08:49:22.737537 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737803 master-0 kubenswrapper[6976]: I0318 08:49:22.737707 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " 
pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737913 master-0 kubenswrapper[6976]: I0318 08:49:22.737819 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.737913 master-0 kubenswrapper[6976]: I0318 08:49:22.737903 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738078 master-0 kubenswrapper[6976]: I0318 08:49:22.737918 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738078 master-0 kubenswrapper[6976]: I0318 08:49:22.737945 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738078 master-0 kubenswrapper[6976]: I0318 08:49:22.738026 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738445 master-0 kubenswrapper[6976]: I0318 08:49:22.738096 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" 
(UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738445 master-0 kubenswrapper[6976]: I0318 08:49:22.738141 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:22.738445 master-0 kubenswrapper[6976]: I0318 08:49:22.738411 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:49:25.475949 master-0 kubenswrapper[6976]: I0318 08:49:25.475865 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:49:25.476696 master-0 kubenswrapper[6976]: I0318 08:49:25.476001 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:49:25.476696 master-0 kubenswrapper[6976]: I0318 08:49:25.476038 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:49:25.485059 master-0 kubenswrapper[6976]: I0318 08:49:25.481356 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:49:25.485059 master-0 kubenswrapper[6976]: I0318 08:49:25.481371 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:49:25.488124 master-0 kubenswrapper[6976]: I0318 08:49:25.487225 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:49:25.577823 master-0 kubenswrapper[6976]: I0318 08:49:25.577736 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:49:25.578040 master-0 kubenswrapper[6976]: I0318 08:49:25.577860 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:49:25.578040 master-0 kubenswrapper[6976]: I0318 08:49:25.577930 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:49:25.578186 master-0 kubenswrapper[6976]: I0318 08:49:25.578164 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:49:25.581135 master-0 kubenswrapper[6976]: I0318 08:49:25.581116 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:49:25.582473 master-0 kubenswrapper[6976]: I0318 08:49:25.582440 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:49:25.583188 master-0 kubenswrapper[6976]: I0318 08:49:25.583134 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:49:25.583797 master-0 kubenswrapper[6976]: I0318 08:49:25.583765 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:49:25.754438 master-0 kubenswrapper[6976]: I0318 08:49:25.754071 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:49:25.755105 master-0 kubenswrapper[6976]: I0318 08:49:25.755051 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:49:25.755206 master-0 kubenswrapper[6976]: I0318 08:49:25.755108 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:49:25.757006 master-0 kubenswrapper[6976]: I0318 08:49:25.756290 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:49:25.757006 master-0 kubenswrapper[6976]: I0318 08:49:25.756418 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:49:25.757938 master-0 kubenswrapper[6976]: I0318 08:49:25.757871 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:49:25.759342 master-0 kubenswrapper[6976]: I0318 08:49:25.759285 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:49:31.283822 master-0 kubenswrapper[6976]: I0318 08:49:31.283761 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_9ecb08ad-f7f1-466e-9b8a-b162137bfebd/installer/0.log" Mar 18 08:49:31.284253 master-0 kubenswrapper[6976]: I0318 08:49:31.283865 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:31.341355 master-0 kubenswrapper[6976]: I0318 08:49:31.341295 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_9ecb08ad-f7f1-466e-9b8a-b162137bfebd/installer/0.log" Mar 18 08:49:31.341611 master-0 kubenswrapper[6976]: I0318 08:49:31.341363 6976 generic.go:334] "Generic (PLEG): container finished" podID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" containerID="41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07" exitCode=1 Mar 18 08:49:31.341611 master-0 kubenswrapper[6976]: I0318 08:49:31.341403 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9ecb08ad-f7f1-466e-9b8a-b162137bfebd","Type":"ContainerDied","Data":"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07"} Mar 18 08:49:31.341611 master-0 kubenswrapper[6976]: I0318 08:49:31.341439 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"9ecb08ad-f7f1-466e-9b8a-b162137bfebd","Type":"ContainerDied","Data":"82b8a76b2600434ebee5ee4ed08dbb29d8146560821e8d2a1127da598ab1b928"} Mar 18 08:49:31.341611 master-0 kubenswrapper[6976]: I0318 08:49:31.341440 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 18 08:49:31.341611 master-0 kubenswrapper[6976]: I0318 08:49:31.341462 6976 scope.go:117] "RemoveContainer" containerID="41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07" Mar 18 08:49:31.356101 master-0 kubenswrapper[6976]: I0318 08:49:31.356039 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock\") pod \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " Mar 18 08:49:31.356101 master-0 kubenswrapper[6976]: I0318 08:49:31.356092 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access\") pod \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " Mar 18 08:49:31.356274 master-0 kubenswrapper[6976]: I0318 08:49:31.356138 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock" (OuterVolumeSpecName: "var-lock") pod "9ecb08ad-f7f1-466e-9b8a-b162137bfebd" (UID: "9ecb08ad-f7f1-466e-9b8a-b162137bfebd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:31.356360 master-0 kubenswrapper[6976]: I0318 08:49:31.356333 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:31.362211 master-0 kubenswrapper[6976]: I0318 08:49:31.362175 6976 scope.go:117] "RemoveContainer" containerID="41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07" Mar 18 08:49:31.363577 master-0 kubenswrapper[6976]: E0318 08:49:31.363516 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07\": container with ID starting with 41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07 not found: ID does not exist" containerID="41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07" Mar 18 08:49:31.363651 master-0 kubenswrapper[6976]: I0318 08:49:31.363557 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07"} err="failed to get container status \"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07\": rpc error: code = NotFound desc = could not find container \"41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07\": container with ID starting with 41f1b4a9ad9735ba06f958721642c2418eaf1c66f2ef6009427324642c726c07 not found: ID does not exist" Mar 18 08:49:31.364679 master-0 kubenswrapper[6976]: I0318 08:49:31.364627 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9ecb08ad-f7f1-466e-9b8a-b162137bfebd" (UID: "9ecb08ad-f7f1-466e-9b8a-b162137bfebd"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:31.457561 master-0 kubenswrapper[6976]: I0318 08:49:31.457491 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir\") pod \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\" (UID: \"9ecb08ad-f7f1-466e-9b8a-b162137bfebd\") " Mar 18 08:49:31.457858 master-0 kubenswrapper[6976]: I0318 08:49:31.457628 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9ecb08ad-f7f1-466e-9b8a-b162137bfebd" (UID: "9ecb08ad-f7f1-466e-9b8a-b162137bfebd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:31.457941 master-0 kubenswrapper[6976]: I0318 08:49:31.457874 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:31.457941 master-0 kubenswrapper[6976]: I0318 08:49:31.457900 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9ecb08ad-f7f1-466e-9b8a-b162137bfebd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:35.366599 master-0 kubenswrapper[6976]: I0318 08:49:35.366516 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640" exitCode=1 Mar 18 08:49:35.367243 master-0 kubenswrapper[6976]: I0318 08:49:35.366612 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"} Mar 18 08:49:35.367243 master-0 kubenswrapper[6976]: I0318 08:49:35.366673 6976 scope.go:117] "RemoveContainer" containerID="3723d82df6a282e88b524b3a08afe8873f1f72923890a0d6f5612d293d44a84b" Mar 18 08:49:35.367418 master-0 kubenswrapper[6976]: I0318 08:49:35.367370 6976 scope.go:117] "RemoveContainer" containerID="f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640" Mar 18 08:49:35.572502 master-0 kubenswrapper[6976]: E0318 08:49:35.572427 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:49:35.573123 master-0 kubenswrapper[6976]: I0318 08:49:35.573081 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:49:35.589960 master-0 kubenswrapper[6976]: W0318 08:49:35.589883 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-941db87289f500e600dd080fc243da957c0e3edabf0294787007e282aa2564e5 WatchSource:0}: Error finding container 941db87289f500e600dd080fc243da957c0e3edabf0294787007e282aa2564e5: Status 404 returned error can't find the container with id 941db87289f500e600dd080fc243da957c0e3edabf0294787007e282aa2564e5 Mar 18 08:49:36.376199 master-0 kubenswrapper[6976]: I0318 08:49:36.376112 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"} Mar 18 08:49:36.378394 master-0 kubenswrapper[6976]: I0318 08:49:36.378318 6976 generic.go:334] "Generic (PLEG): container finished" 
podID="24b4ed170d527099878cb5fdd508a2fb" containerID="05713cda00e01f4fa6b33e36c9677b903f2b97a2f623ad2f25f79ec8b0a1264c" exitCode=0 Mar 18 08:49:36.378394 master-0 kubenswrapper[6976]: I0318 08:49:36.378384 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"05713cda00e01f4fa6b33e36c9677b903f2b97a2f623ad2f25f79ec8b0a1264c"} Mar 18 08:49:36.378675 master-0 kubenswrapper[6976]: I0318 08:49:36.378418 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"941db87289f500e600dd080fc243da957c0e3edabf0294787007e282aa2564e5"} Mar 18 08:49:37.393227 master-0 kubenswrapper[6976]: I0318 08:49:37.393110 6976 generic.go:334] "Generic (PLEG): container finished" podID="c393a935-1821-4742-b1bb-0ee52ada5434" containerID="82098974401c2078cdae0b9cda75b7a09e79d037d34e1919901dd8a75694e9fb" exitCode=0 Mar 18 08:49:37.393227 master-0 kubenswrapper[6976]: I0318 08:49:37.393196 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"c393a935-1821-4742-b1bb-0ee52ada5434","Type":"ContainerDied","Data":"82098974401c2078cdae0b9cda75b7a09e79d037d34e1919901dd8a75694e9fb"} Mar 18 08:49:38.560837 master-0 kubenswrapper[6976]: I0318 08:49:38.560763 6976 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-j75sc container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 08:49:38.561531 master-0 kubenswrapper[6976]: I0318 08:49:38.560847 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" podUID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" 
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 08:49:38.803407 master-0 kubenswrapper[6976]: I0318 08:49:38.803344 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:38.959683 master-0 kubenswrapper[6976]: I0318 08:49:38.959359 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock\") pod \"c393a935-1821-4742-b1bb-0ee52ada5434\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " Mar 18 08:49:38.959683 master-0 kubenswrapper[6976]: I0318 08:49:38.959428 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir\") pod \"c393a935-1821-4742-b1bb-0ee52ada5434\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " Mar 18 08:49:38.959683 master-0 kubenswrapper[6976]: I0318 08:49:38.959514 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access\") pod \"c393a935-1821-4742-b1bb-0ee52ada5434\" (UID: \"c393a935-1821-4742-b1bb-0ee52ada5434\") " Mar 18 08:49:38.959683 master-0 kubenswrapper[6976]: I0318 08:49:38.959592 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock" (OuterVolumeSpecName: "var-lock") pod "c393a935-1821-4742-b1bb-0ee52ada5434" (UID: "c393a935-1821-4742-b1bb-0ee52ada5434"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:38.959683 master-0 kubenswrapper[6976]: I0318 08:49:38.959614 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c393a935-1821-4742-b1bb-0ee52ada5434" (UID: "c393a935-1821-4742-b1bb-0ee52ada5434"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:38.960289 master-0 kubenswrapper[6976]: I0318 08:49:38.959844 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:38.960289 master-0 kubenswrapper[6976]: I0318 08:49:38.959869 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c393a935-1821-4742-b1bb-0ee52ada5434-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:38.964298 master-0 kubenswrapper[6976]: I0318 08:49:38.964212 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c393a935-1821-4742-b1bb-0ee52ada5434" (UID: "c393a935-1821-4742-b1bb-0ee52ada5434"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:49:39.060831 master-0 kubenswrapper[6976]: I0318 08:49:39.060691 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c393a935-1821-4742-b1bb-0ee52ada5434-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:39.404805 master-0 kubenswrapper[6976]: I0318 08:49:39.404667 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"c393a935-1821-4742-b1bb-0ee52ada5434","Type":"ContainerDied","Data":"fb9c3d8b42af9b426126b726ec59a1846a0620aa47da4e39676529cdfdcfe989"} Mar 18 08:49:39.405086 master-0 kubenswrapper[6976]: I0318 08:49:39.405056 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9c3d8b42af9b426126b726ec59a1846a0620aa47da4e39676529cdfdcfe989" Mar 18 08:49:39.405280 master-0 kubenswrapper[6976]: I0318 08:49:39.404755 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 08:49:40.411729 master-0 kubenswrapper[6976]: I0318 08:49:40.411646 6976 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d" exitCode=1 Mar 18 08:49:40.411729 master-0 kubenswrapper[6976]: I0318 08:49:40.411704 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d"} Mar 18 08:49:40.412613 master-0 kubenswrapper[6976]: I0318 08:49:40.412224 6976 scope.go:117] "RemoveContainer" containerID="0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d" Mar 18 08:49:41.418851 master-0 kubenswrapper[6976]: I0318 08:49:41.418792 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f"} Mar 18 08:49:42.226552 master-0 kubenswrapper[6976]: E0318 08:49:42.226306 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:49:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:49:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:49:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:49:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea9
03f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\
"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:42.638875 master-0 kubenswrapper[6976]: E0318 08:49:42.638524 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:43.565327 master-0 kubenswrapper[6976]: I0318 08:49:43.565264 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:49:43.783353 master-0 kubenswrapper[6976]: I0318 08:49:43.783259 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:49:46.783278 master-0 kubenswrapper[6976]: I0318 08:49:46.783214 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:48.561191 master-0 kubenswrapper[6976]: I0318 08:49:48.561107 6976 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-j75sc container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 
08:49:48.561682 master-0 kubenswrapper[6976]: I0318 08:49:48.561194 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" podUID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 08:49:49.385456 master-0 kubenswrapper[6976]: E0318 08:49:49.385377 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:49:50.469414 master-0 kubenswrapper[6976]: I0318 08:49:50.469327 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="8e22cea355c21809ea7ad1e7a2be9dfff724fa66b0b6eb753d91edc0a5a5e930" exitCode=0 Mar 18 08:49:50.469414 master-0 kubenswrapper[6976]: I0318 08:49:50.469422 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"8e22cea355c21809ea7ad1e7a2be9dfff724fa66b0b6eb753d91edc0a5a5e930"} Mar 18 08:49:50.473140 master-0 kubenswrapper[6976]: I0318 08:49:50.473048 6976 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9" exitCode=0 Mar 18 08:49:52.226828 master-0 kubenswrapper[6976]: E0318 08:49:52.226763 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:52.639134 master-0 kubenswrapper[6976]: E0318 08:49:52.639025 6976 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:52.718319 master-0 kubenswrapper[6976]: I0318 08:49:52.718252 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 08:49:52.718502 master-0 kubenswrapper[6976]: I0318 08:49:52.718377 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:49:52.834663 master-0 kubenswrapper[6976]: I0318 08:49:52.834529 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 08:49:52.834818 master-0 kubenswrapper[6976]: I0318 08:49:52.834684 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:52.834869 master-0 kubenswrapper[6976]: I0318 08:49:52.834805 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 18 08:49:52.835048 master-0 kubenswrapper[6976]: I0318 08:49:52.834979 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:49:52.835084 master-0 kubenswrapper[6976]: I0318 08:49:52.835024 6976 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:52.936137 master-0 kubenswrapper[6976]: I0318 08:49:52.936043 6976 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 08:49:53.491977 master-0 kubenswrapper[6976]: I0318 08:49:53.491925 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 18 08:49:53.492710 master-0 kubenswrapper[6976]: I0318 08:49:53.491984 6976 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047" exitCode=137 Mar 18 08:49:53.492710 master-0 kubenswrapper[6976]: I0318 08:49:53.492039 6976 scope.go:117] "RemoveContainer" 
containerID="e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9" Mar 18 08:49:53.492710 master-0 kubenswrapper[6976]: I0318 08:49:53.492194 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:49:53.514412 master-0 kubenswrapper[6976]: I0318 08:49:53.512312 6976 scope.go:117] "RemoveContainer" containerID="0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047" Mar 18 08:49:53.534187 master-0 kubenswrapper[6976]: I0318 08:49:53.534130 6976 scope.go:117] "RemoveContainer" containerID="e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9" Mar 18 08:49:53.534760 master-0 kubenswrapper[6976]: E0318 08:49:53.534696 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9\": container with ID starting with e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9 not found: ID does not exist" containerID="e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9" Mar 18 08:49:53.534887 master-0 kubenswrapper[6976]: I0318 08:49:53.534756 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9"} err="failed to get container status \"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9\": rpc error: code = NotFound desc = could not find container \"e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9\": container with ID starting with e2df9cabd8b88e525a338baa668bc5602df79a0ef876cd2f299ee939dbfff1e9 not found: ID does not exist" Mar 18 08:49:53.534887 master-0 kubenswrapper[6976]: I0318 08:49:53.534793 6976 scope.go:117] "RemoveContainer" containerID="0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047" Mar 18 08:49:53.535294 master-0 kubenswrapper[6976]: E0318 
08:49:53.535237 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047\": container with ID starting with 0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047 not found: ID does not exist" containerID="0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047" Mar 18 08:49:53.535294 master-0 kubenswrapper[6976]: I0318 08:49:53.535279 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047"} err="failed to get container status \"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047\": rpc error: code = NotFound desc = could not find container \"0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047\": container with ID starting with 0d687a1cc19edef86e363b078065411464f70598e58ee47fda14e020c46c6047 not found: ID does not exist" Mar 18 08:49:54.607318 master-0 kubenswrapper[6976]: I0318 08:49:54.607239 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 18 08:49:54.608142 master-0 kubenswrapper[6976]: I0318 08:49:54.607863 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:49:55.506921 master-0 kubenswrapper[6976]: I0318 08:49:55.506861 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log" Mar 18 08:49:55.506921 master-0 kubenswrapper[6976]: I0318 08:49:55.506926 6976 generic.go:334] "Generic (PLEG): container finished" podID="38b830ff-8938-4f21-8977-c29a19c85afb" containerID="b28f4dc9cd44e68014d536f9ea9c8387108c84bc538f43d2e6bb244d9d074b11" exitCode=1 Mar 18 08:49:55.509709 master-0 
kubenswrapper[6976]: I0318 08:49:55.509660 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b75d3625-4131-465d-a8e2-4c42588c7630/installer/0.log" Mar 18 08:49:55.509846 master-0 kubenswrapper[6976]: I0318 08:49:55.509715 6976 generic.go:334] "Generic (PLEG): container finished" podID="b75d3625-4131-465d-a8e2-4c42588c7630" containerID="f10ab16270a7803054be2d271744f71e45d5e3fab77e472706ee3fb055b353ea" exitCode=1 Mar 18 08:49:56.544149 master-0 kubenswrapper[6976]: E0318 08:49:56.543716 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189de3505b6568a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:22.524104868 +0000 UTC m=+62.109706473,LastTimestamp:2026-03-18 08:49:22.524104868 +0000 UTC m=+62.109706473,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:49:56.783419 master-0 kubenswrapper[6976]: I0318 08:49:56.783289 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:49:58.561330 master-0 kubenswrapper[6976]: I0318 08:49:58.561228 6976 patch_prober.go:28] interesting 
pod/authentication-operator-5885bfd7f4-j75sc container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" start-of-body= Mar 18 08:49:58.562516 master-0 kubenswrapper[6976]: I0318 08:49:58.561364 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" podUID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.20:8443/healthz\": dial tcp 10.128.0.20:8443: connect: connection refused" Mar 18 08:50:01.568972 master-0 kubenswrapper[6976]: I0318 08:50:01.568784 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/0.log" Mar 18 08:50:01.568972 master-0 kubenswrapper[6976]: I0318 08:50:01.568897 6976 generic.go:334] "Generic (PLEG): container finished" podID="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" containerID="fd295b6b7843cd03ce43cecd7dcd871e030a3bf9af1473694567c5a5799d4c76" exitCode=255 Mar 18 08:50:02.228158 master-0 kubenswrapper[6976]: E0318 08:50:02.228062 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:02.640315 master-0 kubenswrapper[6976]: E0318 08:50:02.640040 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:03.478816 master-0 kubenswrapper[6976]: E0318 08:50:03.478715 6976 
kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:50:04.592026 master-0 kubenswrapper[6976]: I0318 08:50:04.591957 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="512a999778aeba262c615ce98f4b7e30d2e5304b6c496908178b7d3a73d7fb2e" exitCode=0 Mar 18 08:50:06.609364 master-0 kubenswrapper[6976]: I0318 08:50:06.609317 6976 generic.go:334] "Generic (PLEG): container finished" podID="65cff83a-8d8f-4e4f-96ef-99941c29ba53" containerID="e7040e73164a56f089f0acc8e8f60bd6ac708b6b6770784a34fbb303688099ef" exitCode=0 Mar 18 08:50:06.784368 master-0 kubenswrapper[6976]: I0318 08:50:06.784175 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:12.228551 master-0 kubenswrapper[6976]: E0318 08:50:12.228490 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:12.637895 master-0 kubenswrapper[6976]: I0318 08:50:12.637728 6976 generic.go:334] "Generic (PLEG): container finished" podID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" containerID="3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef" exitCode=0 Mar 18 08:50:12.640866 master-0 kubenswrapper[6976]: E0318 08:50:12.640805 6976 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:13.645851 master-0 kubenswrapper[6976]: I0318 08:50:13.645763 6976 generic.go:334] "Generic (PLEG): container finished" podID="be2682e4-cb63-4102-a83e-ef28023e273a" containerID="8ff399eba975fe3e4ac2c3d81b3e52845b1835ad72d3a17e7e74d5e7eca9397d" exitCode=0 Mar 18 08:50:13.648226 master-0 kubenswrapper[6976]: I0318 08:50:13.648171 6976 generic.go:334] "Generic (PLEG): container finished" podID="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" containerID="cdf9805777db651916bc0fbdb03aeca74e0291990d89a5792cd9c2058bcbad82" exitCode=0 Mar 18 08:50:19.678725 master-0 kubenswrapper[6976]: I0318 08:50:19.678648 6976 generic.go:334] "Generic (PLEG): container finished" podID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" containerID="9cdce5f3b67476e4d83692d6a7f121d082ca7bc4e1f5227b44f8955003a46e71" exitCode=0 Mar 18 08:50:20.688049 master-0 kubenswrapper[6976]: I0318 08:50:20.687983 6976 generic.go:334] "Generic (PLEG): container finished" podID="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" containerID="e101758dad1868c5a7ecd290b1cfffd6e710b7c13cfdccb7b41fe00e23534e6d" exitCode=0 Mar 18 08:50:22.229223 master-0 kubenswrapper[6976]: E0318 08:50:22.229113 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:22.229223 master-0 kubenswrapper[6976]: E0318 08:50:22.229182 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 08:50:22.602369 master-0 kubenswrapper[6976]: I0318 08:50:22.602203 6976 status_manager.go:851] "Failed to get status for pod" podUID="b2588f5c-327c-49cc-8cfb-0cce1ad758d5" 
pod="openshift-dns/dns-default-pj485" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods dns-default-pj485)" Mar 18 08:50:22.642207 master-0 kubenswrapper[6976]: E0318 08:50:22.642084 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:22.642207 master-0 kubenswrapper[6976]: I0318 08:50:22.642213 6976 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 08:50:24.712746 master-0 kubenswrapper[6976]: I0318 08:50:24.712684 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/0.log" Mar 18 08:50:24.713627 master-0 kubenswrapper[6976]: I0318 08:50:24.713292 6976 generic.go:334] "Generic (PLEG): container finished" podID="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" containerID="7a5f71287e8b5eb717808046e6ba2bfb7e60eb4819b757b6fc0b37c1ed02f420" exitCode=1 Mar 18 08:50:26.652954 master-0 kubenswrapper[6976]: E0318 08:50:26.652890 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.652954 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784" Netns:"/var/run/netns/6b832087-03ec-45f5-8370-ea43fe2174c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.652954 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.652954 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: E0318 08:50:26.653086 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784" Netns:"/var/run/netns/6b832087-03ec-45f5-8370-ea43fe2174c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: E0318 08:50:26.653134 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784" Netns:"/var/run/netns/6b832087-03ec-45f5-8370-ea43fe2174c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster 
comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.653447 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:50:26.655330 master-0 kubenswrapper[6976]: E0318 08:50:26.653245 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager(f6833a48-fccb-42bd-ac90-29f08d5bf7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager(f6833a48-fccb-42bd-ac90-29f08d5bf7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784\\\" 
Netns:\\\"/var/run/netns/6b832087-03ec-45f5-8370-ea43fe2174c8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=2b07b6292ab080ebbae5c5c015666fb198b9b71e8fdd472d22d84a33f3740784;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" podUID="f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: E0318 08:50:26.742437 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network 
sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7" Netns:"/var/run/netns/ea74be90-e964-4152-887f-3475b5f50f6e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-twp27?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: E0318 08:50:26.742505 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7" Netns:"/var/run/netns/ea74be90-e964-4152-887f-3475b5f50f6e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-twp27?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: E0318 08:50:26.742526 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7" Netns:"/var/run/netns/ea74be90-e964-4152-887f-3475b5f50f6e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to 
update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-twp27?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:50:26.742656 master-0 kubenswrapper[6976]: E0318 08:50:26.742598 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager(c00ee838-424f-482b-942f-08f0952a5ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager(c00ee838-424f-482b-942f-08f0952a5ccd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7\\\" Netns:\\\"/var/run/netns/ea74be90-e964-4152-887f-3475b5f50f6e\\\" 
IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=de45cfd7839fd9396d7ba8308191a4500af5e215d877da851784bf39a5fe77d7;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-5c9796789-twp27?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" podUID="c00ee838-424f-482b-942f-08f0952a5ccd" Mar 18 08:50:26.766298 master-0 kubenswrapper[6976]: E0318 08:50:26.766238 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.766298 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3" Netns:"/var/run/netns/ade42a03-0df8-4f95-b98e-fb648c7803c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.766298 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.766298 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: E0318 08:50:26.766323 
6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3" Netns:"/var/run/netns/ade42a03-0df8-4f95-b98e-fb648c7803c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: > pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: E0318 08:50:26.766347 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3" Netns:"/var/run/netns/ade42a03-0df8-4f95-b98e-fb648c7803c5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: > pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:50:26.766454 master-0 kubenswrapper[6976]: E0318 08:50:26.766402 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-2xs9n_openshift-multus(e48101ca-f356-45e3-93d7-4e17b8d8066c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-2xs9n_openshift-multus(e48101ca-f356-45e3-93d7-4e17b8d8066c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3\\\" Netns:\\\"/var/run/netns/ade42a03-0df8-4f95-b98e-fb648c7803c5\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=beec5f1dfea138b1a032818a0d53bb862c55685e33207179239caaf90a30eae3;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c" Mar 18 08:50:26.932889 master-0 kubenswrapper[6976]: E0318 08:50:26.932780 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.932889 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37): error adding pod 
openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37" Netns:"/var/run/netns/41f700f3-bc60-45cd-b777-1fc4b3928881" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-8vfjr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.932889 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.932889 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: E0318 08:50:26.932941 6976 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err=< Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37" Netns:"/var/run/netns/41f700f3-bc60-45cd-b777-1fc4b3928881" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-8vfjr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: E0318 08:50:26.932991 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37" Netns:"/var/run/netns/41f700f3-bc60-45cd-b777-1fc4b3928881" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: 
SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-8vfjr?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.933027 master-0 kubenswrapper[6976]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:50:26.938682 master-0 kubenswrapper[6976]: E0318 08:50:26.933136 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring(09269324-c908-474d-818f-5cd49406f1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring(09269324-c908-474d-818f-5cd49406f1e2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37\\\" 
Netns:\\\"/var/run/netns/41f700f3-bc60-45cd-b777-1fc4b3928881\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=06b002626d9616d06e2a567585a7fa7a3f9548bb3746097a758bfee3b2531a37;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-58845fbb57-8vfjr?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" podUID="09269324-c908-474d-818f-5cd49406f1e2" Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: E0318 08:50:26.956812 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network 
sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa" Netns:"/var/run/netns/1e63a24b-aa36-42c8-9097-66fba7cf60da" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.957210 
master-0 kubenswrapper[6976]: > Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: E0318 08:50:26.956891 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa" Netns:"/var/run/netns/1e63a24b-aa36-42c8-9097-66fba7cf60da" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: E0318 08:50:26.956914 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa" Netns:"/var/run/netns/1e63a24b-aa36-42c8-9097-66fba7cf60da" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod 
marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:50:26.957210 master-0 kubenswrapper[6976]: E0318 08:50:26.956989 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-89ccd998f-m862c_openshift-marketplace(ca9d4694-8675-47c5-819f-89bba9dcdc0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-89ccd998f-m862c_openshift-marketplace(ca9d4694-8675-47c5-819f-89bba9dcdc0f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa\\\" Netns:\\\"/var/run/netns/1e63a24b-aa36-42c8-9097-66fba7cf60da\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=decb5768c8951925d3cc13e1d3590195b7c04210e7ae447c35626547730f28aa;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" podUID="ca9d4694-8675-47c5-819f-89bba9dcdc0f" Mar 18 08:50:26.959111 master-0 kubenswrapper[6976]: E0318 08:50:26.959018 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.959111 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9" Netns:"/var/run/netns/6b40573f-01d0-4f43-8ee7-566042e27740" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.959111 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} 
Mar 18 08:50:26.959111 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: E0318 08:50:26.959116 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9" Netns:"/var/run/netns/6b40573f-01d0-4f43-8ee7-566042e27740" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: E0318 08:50:26.959149 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9" Netns:"/var/run/netns/6b40573f-01d0-4f43-8ee7-566042e27740" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update 
the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:50:26.959702 master-0 kubenswrapper[6976]: E0318 08:50:26.959240 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus(7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus(7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9\\\" Netns:\\\"/var/run/netns/6b40573f-01d0-4f43-8ee7-566042e27740\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=35b5b392421ffc8738d7493e132eabf256f003677c823495d9690b9d051b87a9;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Mar 18 08:50:26.962298 master-0 kubenswrapper[6976]: E0318 08:50:26.962231 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:50:26.962298 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c" Netns:"/var/run/netns/495e80fa-eee1-4217-b9c4-b435c7398540" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.962298 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.962298 master-0 kubenswrapper[6976]: > Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: E0318 08:50:26.962328 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c" Netns:"/var/run/netns/495e80fa-eee1-4217-b9c4-b435c7398540" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update 
the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: E0318 08:50:26.962362 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c" Netns:"/var/run/netns/495e80fa-eee1-4217-b9c4-b435c7398540" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:50:26.962442 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:50:26.962724 master-0 kubenswrapper[6976]: E0318 08:50:26.962479 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager(2d0da6e3-3887-4361-8eae-e7447f9ff72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager(2d0da6e3-3887-4361-8eae-e7447f9ff72c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c\\\" Netns:\\\"/var/run/netns/495e80fa-eee1-4217-b9c4-b435c7398540\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=fbd912f4aff86737967ecc416b945b4b796d9a809ac9a5afc3df05bc2608603c;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" podUID="2d0da6e3-3887-4361-8eae-e7447f9ff72c" Mar 18 08:50:28.612159 master-0 kubenswrapper[6976]: E0318 08:50:28.611876 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:50:28.615202 master-0 kubenswrapper[6976]: E0318 08:50:28.612238 6976 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Mar 18 08:50:28.615202 master-0 kubenswrapper[6976]: I0318 08:50:28.612279 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:50:28.615202 master-0 kubenswrapper[6976]: I0318 08:50:28.612324 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 08:50:28.615202 master-0 kubenswrapper[6976]: I0318 08:50:28.614064 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 08:50:28.615202 master-0 
kubenswrapper[6976]: I0318 08:50:28.614188 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964" gracePeriod=30 Mar 18 08:50:28.615202 master-0 kubenswrapper[6976]: I0318 08:50:28.614400 6976 scope.go:117] "RemoveContainer" containerID="3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef" Mar 18 08:50:28.620821 master-0 kubenswrapper[6976]: I0318 08:50:28.620785 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:50:29.736623 master-0 kubenswrapper[6976]: I0318 08:50:29.736537 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964" exitCode=2 Mar 18 08:50:30.547989 master-0 kubenswrapper[6976]: E0318 08:50:30.547786 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3535916c2fc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:35.370298108 +0000 UTC m=+74.955899733,LastTimestamp:2026-03-18 08:49:35.370298108 +0000 UTC 
m=+74.955899733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:50:30.745949 master-0 kubenswrapper[6976]: I0318 08:50:30.745883 6976 generic.go:334] "Generic (PLEG): container finished" podID="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" containerID="b07a3a34e91709be9071f795c0e0650539cb11f6bc35fb3bec049b4bc3051c6c" exitCode=0 Mar 18 08:50:32.643589 master-0 kubenswrapper[6976]: E0318 08:50:32.643444 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 18 08:50:36.208850 master-0 kubenswrapper[6976]: I0318 08:50:36.208753 6976 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-f2nfl container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 18 08:50:36.208850 master-0 kubenswrapper[6976]: I0318 08:50:36.208838 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" podUID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 18 08:50:38.786133 master-0 kubenswrapper[6976]: I0318 08:50:38.786050 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log" Mar 18 08:50:38.786941 master-0 kubenswrapper[6976]: I0318 08:50:38.786142 6976 generic.go:334] "Generic (PLEG): container finished" podID="3253d87f-ae48-42cf-950f-f508a9b82d0d" 
containerID="f4700f538c7d454f7c9d134fd47d7a5c2ce673d0b9bd02c96a2dfc730672550e" exitCode=1 Mar 18 08:50:42.469719 master-0 kubenswrapper[6976]: E0318 08:50:42.469446 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:50:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea59
62301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87
c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:42.844489 master-0 kubenswrapper[6976]: E0318 08:50:42.844278 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 08:50:52.470626 master-0 kubenswrapper[6976]: E0318 08:50:52.470517 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:50:53.245744 master-0 kubenswrapper[6976]: E0318 08:50:53.245601 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 08:51:02.471227 master-0 kubenswrapper[6976]: E0318 08:51:02.471148 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:02.624351 master-0 kubenswrapper[6976]: E0318 08:51:02.624248 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:51:02.624650 master-0 kubenswrapper[6976]: E0318 08:51:02.624549 6976 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s" Mar 18 08:51:02.624650 master-0 kubenswrapper[6976]: I0318 08:51:02.624635 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38b830ff-8938-4f21-8977-c29a19c85afb","Type":"ContainerDied","Data":"b28f4dc9cd44e68014d536f9ea9c8387108c84bc538f43d2e6bb244d9d074b11"} Mar 18 08:51:02.624836 master-0 kubenswrapper[6976]: I0318 08:51:02.624680 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b75d3625-4131-465d-a8e2-4c42588c7630","Type":"ContainerDied","Data":"f10ab16270a7803054be2d271744f71e45d5e3fab77e472706ee3fb055b353ea"} Mar 18 08:51:02.624910 master-0 kubenswrapper[6976]: I0318 08:51:02.624837 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerDied","Data":"fd295b6b7843cd03ce43cecd7dcd871e030a3bf9af1473694567c5a5799d4c76"} Mar 18 08:51:02.625952 master-0 kubenswrapper[6976]: I0318 08:51:02.625810 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:51:02.625952 master-0 kubenswrapper[6976]: I0318 08:51:02.625839 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:51:02.625952 master-0 kubenswrapper[6976]: I0318 08:51:02.625878 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:51:02.626214 master-0 kubenswrapper[6976]: I0318 08:51:02.625936 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:51:02.626214 master-0 kubenswrapper[6976]: I0318 08:51:02.626044 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:51:02.626214 master-0 kubenswrapper[6976]: I0318 08:51:02.626095 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:51:02.626422 master-0 kubenswrapper[6976]: I0318 08:51:02.626321 6976 scope.go:117] "RemoveContainer" containerID="e7040e73164a56f089f0acc8e8f60bd6ac708b6b6770784a34fbb303688099ef" Mar 18 08:51:02.626489 master-0 kubenswrapper[6976]: I0318 08:51:02.626460 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 08:51:02.628606 master-0 kubenswrapper[6976]: I0318 08:51:02.627319 6976 scope.go:117] "RemoveContainer" containerID="fd295b6b7843cd03ce43cecd7dcd871e030a3bf9af1473694567c5a5799d4c76" Mar 18 08:51:02.628606 master-0 kubenswrapper[6976]: I0318 08:51:02.628503 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:51:02.630220 master-0 kubenswrapper[6976]: I0318 08:51:02.630141 6976 scope.go:117] "RemoveContainer" containerID="b07a3a34e91709be9071f795c0e0650539cb11f6bc35fb3bec049b4bc3051c6c" Mar 18 08:51:02.631305 master-0 kubenswrapper[6976]: I0318 08:51:02.631253 6976 scope.go:117] "RemoveContainer" containerID="8ff399eba975fe3e4ac2c3d81b3e52845b1835ad72d3a17e7e74d5e7eca9397d" Mar 18 08:51:02.631620 master-0 kubenswrapper[6976]: I0318 08:51:02.631542 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:51:02.635268 master-0 kubenswrapper[6976]: I0318 08:51:02.635202 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:51:02.637823 master-0 kubenswrapper[6976]: I0318 08:51:02.637766 6976 scope.go:117] "RemoveContainer" containerID="cdf9805777db651916bc0fbdb03aeca74e0291990d89a5792cd9c2058bcbad82" Mar 18 08:51:02.638795 master-0 kubenswrapper[6976]: I0318 08:51:02.638488 6976 scope.go:117] "RemoveContainer" containerID="7a5f71287e8b5eb717808046e6ba2bfb7e60eb4819b757b6fc0b37c1ed02f420" Mar 18 08:51:02.643350 master-0 kubenswrapper[6976]: I0318 08:51:02.642336 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 08:51:02.646182 master-0 kubenswrapper[6976]: I0318 08:51:02.646119 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:51:02.647980 master-0 kubenswrapper[6976]: I0318 08:51:02.647917 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:51:02.648321 master-0 kubenswrapper[6976]: I0318 08:51:02.648020 6976 scope.go:117] "RemoveContainer" containerID="e101758dad1868c5a7ecd290b1cfffd6e710b7c13cfdccb7b41fe00e23534e6d" Mar 18 08:51:02.648633 master-0 kubenswrapper[6976]: I0318 08:51:02.648477 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:51:02.655071 master-0 kubenswrapper[6976]: I0318 08:51:02.654996 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:51:03.437559 master-0 kubenswrapper[6976]: I0318 08:51:03.437525 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log" Mar 18 08:51:03.437638 master-0 kubenswrapper[6976]: I0318 08:51:03.437610 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:51:03.440841 master-0 kubenswrapper[6976]: I0318 08:51:03.440802 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log" Mar 18 08:51:03.440940 master-0 kubenswrapper[6976]: I0318 08:51:03.440916 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:51:03.443790 master-0 kubenswrapper[6976]: I0318 08:51:03.443760 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b75d3625-4131-465d-a8e2-4c42588c7630/installer/0.log" Mar 18 08:51:03.443845 master-0 kubenswrapper[6976]: I0318 08:51:03.443818 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:51:03.540847 master-0 kubenswrapper[6976]: I0318 08:51:03.540698 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir\") pod \"3253d87f-ae48-42cf-950f-f508a9b82d0d\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " Mar 18 08:51:03.540847 master-0 kubenswrapper[6976]: I0318 08:51:03.540801 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock\") pod \"38b830ff-8938-4f21-8977-c29a19c85afb\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.540893 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access\") pod \"3253d87f-ae48-42cf-950f-f508a9b82d0d\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.540955 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access\") pod \"38b830ff-8938-4f21-8977-c29a19c85afb\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541004 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir\") pod \"38b830ff-8938-4f21-8977-c29a19c85afb\" (UID: \"38b830ff-8938-4f21-8977-c29a19c85afb\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.540948 6976 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3253d87f-ae48-42cf-950f-f508a9b82d0d" (UID: "3253d87f-ae48-42cf-950f-f508a9b82d0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541062 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock\") pod \"b75d3625-4131-465d-a8e2-4c42588c7630\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541123 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock\") pod \"3253d87f-ae48-42cf-950f-f508a9b82d0d\" (UID: \"3253d87f-ae48-42cf-950f-f508a9b82d0d\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541212 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access\") pod \"b75d3625-4131-465d-a8e2-4c42588c7630\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541114 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "38b830ff-8938-4f21-8977-c29a19c85afb" (UID: "38b830ff-8938-4f21-8977-c29a19c85afb"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541166 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock" (OuterVolumeSpecName: "var-lock") pod "b75d3625-4131-465d-a8e2-4c42588c7630" (UID: "b75d3625-4131-465d-a8e2-4c42588c7630"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541266 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock" (OuterVolumeSpecName: "var-lock") pod "3253d87f-ae48-42cf-950f-f508a9b82d0d" (UID: "3253d87f-ae48-42cf-950f-f508a9b82d0d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541003 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock" (OuterVolumeSpecName: "var-lock") pod "38b830ff-8938-4f21-8977-c29a19c85afb" (UID: "38b830ff-8938-4f21-8977-c29a19c85afb"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541378 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir\") pod \"b75d3625-4131-465d-a8e2-4c42588c7630\" (UID: \"b75d3625-4131-465d-a8e2-4c42588c7630\") " Mar 18 08:51:03.541649 master-0 kubenswrapper[6976]: I0318 08:51:03.541498 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b75d3625-4131-465d-a8e2-4c42588c7630" (UID: "b75d3625-4131-465d-a8e2-4c42588c7630"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:51:03.542365 master-0 kubenswrapper[6976]: I0318 08:51:03.541772 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.542365 master-0 kubenswrapper[6976]: I0318 08:51:03.541803 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.542365 master-0 kubenswrapper[6976]: I0318 08:51:03.541823 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.542365 master-0 kubenswrapper[6976]: I0318 08:51:03.541840 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/38b830ff-8938-4f21-8977-c29a19c85afb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.542365 
master-0 kubenswrapper[6976]: I0318 08:51:03.541858 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b75d3625-4131-465d-a8e2-4c42588c7630-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.542365 master-0 kubenswrapper[6976]: I0318 08:51:03.541874 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3253d87f-ae48-42cf-950f-f508a9b82d0d-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.544179 master-0 kubenswrapper[6976]: I0318 08:51:03.544125 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b75d3625-4131-465d-a8e2-4c42588c7630" (UID: "b75d3625-4131-465d-a8e2-4c42588c7630"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:03.544669 master-0 kubenswrapper[6976]: I0318 08:51:03.544619 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "38b830ff-8938-4f21-8977-c29a19c85afb" (UID: "38b830ff-8938-4f21-8977-c29a19c85afb"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:03.547466 master-0 kubenswrapper[6976]: I0318 08:51:03.547396 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3253d87f-ae48-42cf-950f-f508a9b82d0d" (UID: "3253d87f-ae48-42cf-950f-f508a9b82d0d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:51:03.642764 master-0 kubenswrapper[6976]: I0318 08:51:03.642665 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3253d87f-ae48-42cf-950f-f508a9b82d0d-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.642764 master-0 kubenswrapper[6976]: I0318 08:51:03.642729 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/38b830ff-8938-4f21-8977-c29a19c85afb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.642764 master-0 kubenswrapper[6976]: I0318 08:51:03.642748 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b75d3625-4131-465d-a8e2-4c42588c7630-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:51:03.952384 master-0 kubenswrapper[6976]: I0318 08:51:03.952273 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/0.log" Mar 18 08:51:03.954684 master-0 kubenswrapper[6976]: I0318 08:51:03.954659 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/0.log" Mar 18 08:51:03.957277 master-0 kubenswrapper[6976]: I0318 08:51:03.957242 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log" Mar 18 08:51:03.957406 master-0 kubenswrapper[6976]: I0318 08:51:03.957382 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 08:51:03.960675 master-0 kubenswrapper[6976]: I0318 08:51:03.960642 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log" Mar 18 08:51:03.960819 master-0 kubenswrapper[6976]: I0318 08:51:03.960793 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 08:51:03.963621 master-0 kubenswrapper[6976]: I0318 08:51:03.962768 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b75d3625-4131-465d-a8e2-4c42588c7630/installer/0.log" Mar 18 08:51:03.963621 master-0 kubenswrapper[6976]: I0318 08:51:03.962881 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 08:51:04.046888 master-0 kubenswrapper[6976]: E0318 08:51:04.046760 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 08:51:04.551407 master-0 kubenswrapper[6976]: E0318 08:51:04.551190 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189de3536656d288 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:35.5926002 +0000 UTC m=+75.178201795,LastTimestamp:2026-03-18 08:49:35.5926002 +0000 UTC m=+75.178201795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:51:06.993981 master-0 kubenswrapper[6976]: I0318 08:51:06.993926 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-2g6x9_0f6a7f55-84bd-4ea5-8248-4cb565904c3b/openshift-controller-manager-operator/0.log" Mar 18 08:51:06.994499 master-0 kubenswrapper[6976]: I0318 08:51:06.993993 6976 generic.go:334] "Generic (PLEG): container finished" podID="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" containerID="66cbf701fabf0e0f193e14614de147bfd5b674f1f5978178edd97cd8b89c12a4" exitCode=1 Mar 18 08:51:12.471909 master-0 kubenswrapper[6976]: E0318 08:51:12.471817 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 18 08:51:15.648802 master-0 kubenswrapper[6976]: E0318 08:51:15.648692 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 08:51:22.472681 master-0 kubenswrapper[6976]: E0318 08:51:22.472628 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:22.473652 master-0 kubenswrapper[6976]: E0318 08:51:22.473621 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 08:51:22.603784 master-0 kubenswrapper[6976]: I0318 08:51:22.603666 6976 status_manager.go:851] "Failed to get status for pod" podUID="46f265536aba6292ead501bc9b49f327" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)" Mar 18 08:51:24.707231 master-0 kubenswrapper[6976]: I0318 08:51:24.707165 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:24.708082 master-0 kubenswrapper[6976]: I0318 08:51:24.707275 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:24.708082 master-0 kubenswrapper[6976]: I0318 08:51:24.707190 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:24.708082 master-0 kubenswrapper[6976]: I0318 08:51:24.707883 
6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:25.096012 master-0 kubenswrapper[6976]: I0318 08:51:25.095836 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/0.log" Mar 18 08:51:25.096012 master-0 kubenswrapper[6976]: I0318 08:51:25.095888 6976 generic.go:334] "Generic (PLEG): container finished" podID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerID="bc52f72875ab784115d2ae7cf81aabfc20eff1b537ca6458d743902aaf4541e4" exitCode=1 Mar 18 08:51:27.111867 master-0 kubenswrapper[6976]: I0318 08:51:27.111765 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/0.log" Mar 18 08:51:27.112773 master-0 kubenswrapper[6976]: I0318 08:51:27.112423 6976 generic.go:334] "Generic (PLEG): container finished" podID="411d544f-e105-44f0-927a-f61406b3f070" containerID="177f16090fa41cba4e3892f17219367dee40fa3695daf9c589750f25c0f6d328" exitCode=1 Mar 18 08:51:27.115112 master-0 kubenswrapper[6976]: I0318 08:51:27.115050 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/0.log" Mar 18 08:51:27.115274 master-0 kubenswrapper[6976]: I0318 08:51:27.115121 6976 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="b7023722fb31c9ade901bb4f5f5537f159e85f319ef882c910c37283dbf679ec" exitCode=1 Mar 18 08:51:28.851615 master-0 
kubenswrapper[6976]: E0318 08:51:28.850612 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 08:51:30.133753 master-0 kubenswrapper[6976]: I0318 08:51:30.133541 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="fc3bba74c1c5dfc4469c628e1ccd99032fb59aaf6362379db3f1337bbf0219a6" exitCode=1 Mar 18 08:51:34.161409 master-0 kubenswrapper[6976]: I0318 08:51:34.161343 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/0.log" Mar 18 08:51:34.162294 master-0 kubenswrapper[6976]: I0318 08:51:34.161412 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="9d25c9c9b5ced91c32a1b9dd7e48ce6b3235062e8dd7fa065d776452831b8b1b" exitCode=1 Mar 18 08:51:34.707824 master-0 kubenswrapper[6976]: I0318 08:51:34.707730 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:34.708110 master-0 kubenswrapper[6976]: I0318 08:51:34.707833 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:36.123134 master-0 kubenswrapper[6976]: I0318 
08:51:36.123003 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Mar 18 08:51:36.123134 master-0 kubenswrapper[6976]: I0318 08:51:36.123090 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" Mar 18 08:51:36.207804 master-0 kubenswrapper[6976]: I0318 08:51:36.207670 6976 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-f2nfl container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 18 08:51:36.207804 master-0 kubenswrapper[6976]: I0318 08:51:36.207742 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" podUID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 18 08:51:36.658899 master-0 kubenswrapper[6976]: E0318 08:51:36.658815 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 18 08:51:36.659163 master-0 kubenswrapper[6976]: E0318 08:51:36.658968 6976 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.034s" Mar 18 08:51:36.659163 master-0 kubenswrapper[6976]: I0318 08:51:36.658989 
6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"512a999778aeba262c615ce98f4b7e30d2e5304b6c496908178b7d3a73d7fb2e"} Mar 18 08:51:36.665298 master-0 kubenswrapper[6976]: I0318 08:51:36.665242 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 18 08:51:38.188028 master-0 kubenswrapper[6976]: I0318 08:51:38.187944 6976 generic.go:334] "Generic (PLEG): container finished" podID="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" containerID="35bec5aad4d31f588044876420b3abf5aa56e6a349124b911e43ef3a01a96e33" exitCode=0 Mar 18 08:51:38.192138 master-0 kubenswrapper[6976]: I0318 08:51:38.192074 6976 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" containerID="94a4ad92cd3b53ae4641e35e7fd4ec8fccd8630c21c0fc3c12a574e02645e3da" exitCode=0 Mar 18 08:51:38.554319 master-0 kubenswrapper[6976]: E0318 08:51:38.554153 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3536ae503c5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:35.669027781 +0000 UTC m=+75.254629376,LastTimestamp:2026-03-18 08:49:35.669027781 +0000 UTC m=+75.254629376,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:51:42.682231 
master-0 kubenswrapper[6976]: E0318 08:51:42.681962 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:32Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:32Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:32Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:51:32Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\
\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":4115
87146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:44.706539 master-0 kubenswrapper[6976]: I0318 08:51:44.706479 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:44.707115 master-0 kubenswrapper[6976]: I0318 08:51:44.706536 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:44.707391 master-0 kubenswrapper[6976]: I0318 08:51:44.707355 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:44.707545 master-0 kubenswrapper[6976]: I0318 08:51:44.707509 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" 
containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:45.252215 master-0 kubenswrapper[6976]: E0318 08:51:45.251923 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 08:51:46.122815 master-0 kubenswrapper[6976]: I0318 08:51:46.122707 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Mar 18 08:51:46.122815 master-0 kubenswrapper[6976]: I0318 08:51:46.122751 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Mar 18 08:51:46.123938 master-0 kubenswrapper[6976]: I0318 08:51:46.122821 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" Mar 18 08:51:46.123938 master-0 kubenswrapper[6976]: I0318 08:51:46.122829 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 
10.128.0.34:8081: connect: connection refused" Mar 18 08:51:49.667974 master-0 kubenswrapper[6976]: E0318 08:51:49.667915 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:51:52.683137 master-0 kubenswrapper[6976]: E0318 08:51:52.683071 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:51:54.707261 master-0 kubenswrapper[6976]: I0318 08:51:54.707054 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body= Mar 18 08:51:54.707261 master-0 kubenswrapper[6976]: I0318 08:51:54.707161 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" Mar 18 08:51:56.123018 master-0 kubenswrapper[6976]: I0318 08:51:56.122935 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body= Mar 18 08:51:56.123473 master-0 kubenswrapper[6976]: I0318 08:51:56.123034 6976 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" Mar 18 08:52:00.334211 master-0 kubenswrapper[6976]: I0318 08:52:00.334152 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/1.log" Mar 18 08:52:00.335264 master-0 kubenswrapper[6976]: I0318 08:52:00.334589 6976 generic.go:334] "Generic (PLEG): container finished" podID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" containerID="9c9d46ecc19961b32a9a632092c439cef6feaecffc62b43586ab2e3093d0896c" exitCode=255 Mar 18 08:52:02.253126 master-0 kubenswrapper[6976]: E0318 08:52:02.253004 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 08:52:02.683785 master-0 kubenswrapper[6976]: E0318 08:52:02.683558 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: E0318 08:52:03.369927 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8): error adding pod 
openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8" Netns:"/var/run/netns/d3010362-34af-4949-833c-22b9ca03204e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: > Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: E0318 08:52:03.370005 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to 
create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8" Netns:"/var/run/netns/d3010362-34af-4949-833c-22b9ca03204e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: > pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: E0318 08:52:03.370037 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8" Netns:"/var/run/netns/d3010362-34af-4949-833c-22b9ca03204e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c" Path:"" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: > pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:52:03.370711 master-0 kubenswrapper[6976]: E0318 08:52:03.370111 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-metrics-daemon-2xs9n_openshift-multus(e48101ca-f356-45e3-93d7-4e17b8d8066c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-metrics-daemon-2xs9n_openshift-multus(e48101ca-f356-45e3-93d7-4e17b8d8066c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-2xs9n_openshift-multus_e48101ca-f356-45e3-93d7-4e17b8d8066c_0(9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8): error adding pod openshift-multus_network-metrics-daemon-2xs9n to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8\\\" Netns:\\\"/var/run/netns/d3010362-34af-4949-833c-22b9ca03204e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=network-metrics-daemon-2xs9n;K8S_POD_INFRA_CONTAINER_ID=9cc1770c1d426617ed65710e698bf6710dcf4998a99a0f4d8e13cb520ed2c4c8;K8S_POD_UID=e48101ca-f356-45e3-93d7-4e17b8d8066c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/network-metrics-daemon-2xs9n] networking: Multus: [openshift-multus/network-metrics-daemon-2xs9n/e48101ca-f356-45e3-93d7-4e17b8d8066c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: SetNetworkStatus: failed to update the pod network-metrics-daemon-2xs9n in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-2xs9n?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/network-metrics-daemon-2xs9n" podUID="e48101ca-f356-45e3-93d7-4e17b8d8066c"
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: E0318 08:52:03.474946 6976 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903" Netns:"/var/run/netns/a6575dea-1f55-4b5f-b561-ed822263c896" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: >
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: E0318 08:52:03.475020 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903" Netns:"/var/run/netns/a6575dea-1f55-4b5f-b561-ed822263c896" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: E0318 08:52:03.475046 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903" Netns:"/var/run/netns/a6575dea-1f55-4b5f-b561-ed822263c896" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:52:03.475193 master-0 kubenswrapper[6976]: E0318 08:52:03.475121 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager(f6833a48-fccb-42bd-ac90-29f08d5bf7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager(f6833a48-fccb-42bd-ac90-29f08d5bf7e8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_catalog-operator-68f85b4d6c-hhn7l_openshift-operator-lifecycle-manager_f6833a48-fccb-42bd-ac90-29f08d5bf7e8_0(7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903): error adding pod openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903\\\" Netns:\\\"/var/run/netns/a6575dea-1f55-4b5f-b561-ed822263c896\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=catalog-operator-68f85b4d6c-hhn7l;K8S_POD_INFRA_CONTAINER_ID=7d058f974d59a0c12921293b056973003ebbb92d4e1a0155851c36db9c3ef903;K8S_POD_UID=f6833a48-fccb-42bd-ac90-29f08d5bf7e8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l] networking: Multus: [openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l/f6833a48-fccb-42bd-ac90-29f08d5bf7e8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: SetNetworkStatus: failed to update the pod catalog-operator-68f85b4d6c-hhn7l in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68f85b4d6c-hhn7l?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" podUID="f6833a48-fccb-42bd-ac90-29f08d5bf7e8"
Mar 18 08:52:03.509957 master-0 kubenswrapper[6976]: E0318 08:52:03.509869 6976 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 08:52:03.509957 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de" Netns:"/var/run/netns/69d22f8e-fac3-4b11-984b-c5cde1226a42" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods olm-operator-5c9796789-twp27)
Mar 18 08:52:03.509957 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.509957 master-0 kubenswrapper[6976]: >
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: E0318 08:52:03.509982 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de" Netns:"/var/run/netns/69d22f8e-fac3-4b11-984b-c5cde1226a42" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods olm-operator-5c9796789-twp27)
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: E0318 08:52:03.510003 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de" Netns:"/var/run/netns/69d22f8e-fac3-4b11-984b-c5cde1226a42" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods olm-operator-5c9796789-twp27)
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:52:03.510161 master-0 kubenswrapper[6976]: E0318 08:52:03.510068 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager(c00ee838-424f-482b-942f-08f0952a5ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager(c00ee838-424f-482b-942f-08f0952a5ccd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_olm-operator-5c9796789-twp27_openshift-operator-lifecycle-manager_c00ee838-424f-482b-942f-08f0952a5ccd_0(e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de): error adding pod openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de\\\" Netns:\\\"/var/run/netns/69d22f8e-fac3-4b11-984b-c5cde1226a42\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=olm-operator-5c9796789-twp27;K8S_POD_INFRA_CONTAINER_ID=e32fd629f072539b833e7488caadb85e0670b967c5e5de3bf17b9fb04f5587de;K8S_POD_UID=c00ee838-424f-482b-942f-08f0952a5ccd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27] networking: Multus: [openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27/c00ee838-424f-482b-942f-08f0952a5ccd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: SetNetworkStatus: failed to update the pod olm-operator-5c9796789-twp27 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods olm-operator-5c9796789-twp27)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" podUID="c00ee838-424f-482b-942f-08f0952a5ccd"
Mar 18 08:52:03.818758 master-0 kubenswrapper[6976]: E0318 08:52:03.818579 6976 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 08:52:03.818758 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465" Netns:"/var/run/netns/6d031b9b-7c8f-468c-97dc-87cfe5ddfb7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-58845fbb57-8vfjr)
Mar 18 08:52:03.818758 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.818758 master-0 kubenswrapper[6976]: >
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: E0318 08:52:03.819709 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465" Netns:"/var/run/netns/6d031b9b-7c8f-468c-97dc-87cfe5ddfb7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-58845fbb57-8vfjr)
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: E0318 08:52:03.819793 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465" Netns:"/var/run/netns/6d031b9b-7c8f-468c-97dc-87cfe5ddfb7a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2" Path:"" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-58845fbb57-8vfjr)
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.819888 master-0 kubenswrapper[6976]: > pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:52:03.820129 master-0 kubenswrapper[6976]: E0318 08:52:03.819877 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring(09269324-c908-474d-818f-5cd49406f1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring(09269324-c908-474d-818f-5cd49406f1e2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-monitoring-operator-58845fbb57-8vfjr_openshift-monitoring_09269324-c908-474d-818f-5cd49406f1e2_0(e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465): error adding pod openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465\\\" Netns:\\\"/var/run/netns/6d031b9b-7c8f-468c-97dc-87cfe5ddfb7a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=cluster-monitoring-operator-58845fbb57-8vfjr;K8S_POD_INFRA_CONTAINER_ID=e7a4bc8b8140515d85a93fdd50c1376ee00e33b83350e67d02a91e6c70a98465;K8S_POD_UID=09269324-c908-474d-818f-5cd49406f1e2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr] networking: Multus: [openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr/09269324-c908-474d-818f-5cd49406f1e2]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-monitoring-operator-58845fbb57-8vfjr in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-monitoring-operator-58845fbb57-8vfjr)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" podUID="09269324-c908-474d-818f-5cd49406f1e2"
Mar 18 08:52:03.911205 master-0 kubenswrapper[6976]: E0318 08:52:03.911122 6976 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 18 08:52:03.911205 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01" Netns:"/var/run/netns/d51dc7b7-647f-474d-b89e-4d21bbc8af3f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.911205 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 18 08:52:03.911205 master-0 kubenswrapper[6976]: >
Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: E0318 08:52:03.911210 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01" Netns:"/var/run/netns/d51dc7b7-647f-474d-b89e-4d21bbc8af3f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: ': StdinData:
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: E0318 08:52:03.911235 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01" Netns:"/var/run/netns/d51dc7b7-647f-474d-b89e-4d21bbc8af3f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: > pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:52:03.911689 master-0 kubenswrapper[6976]: E0318 08:52:03.911319 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager(2d0da6e3-3887-4361-8eae-e7447f9ff72c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager(2d0da6e3-3887-4361-8eae-e7447f9ff72c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_package-server-manager-7b95f86987-k6xp5_openshift-operator-lifecycle-manager_2d0da6e3-3887-4361-8eae-e7447f9ff72c_0(11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01): error adding pod openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with 
status 400: 'ContainerID:\\\"11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01\\\" Netns:\\\"/var/run/netns/d51dc7b7-647f-474d-b89e-4d21bbc8af3f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=package-server-manager-7b95f86987-k6xp5;K8S_POD_INFRA_CONTAINER_ID=11c0c503ed8f91ceaf22e1c38d706041c97661838c8dd0505a81d9202062ca01;K8S_POD_UID=2d0da6e3-3887-4361-8eae-e7447f9ff72c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5] networking: Multus: [openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5/2d0da6e3-3887-4361-8eae-e7447f9ff72c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: SetNetworkStatus: failed to update the pod package-server-manager-7b95f86987-k6xp5 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-7b95f86987-k6xp5?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" podUID="2d0da6e3-3887-4361-8eae-e7447f9ff72c" Mar 18 08:52:03.931626 master-0 kubenswrapper[6976]: E0318 08:52:03.931552 6976 log.go:32] "RunPodSandbox from runtime 
service failed" err=< Mar 18 08:52:03.931626 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a" Netns:"/var/run/netns/d9111732-193c-48ea-9d2b-a4ad23d8df0e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.931626 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.931626 master-0 kubenswrapper[6976]: > Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: E0318 08:52:03.931651 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a" Netns:"/var/run/netns/d9111732-193c-48ea-9d2b-a4ad23d8df0e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster 
comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: E0318 08:52:03.931675 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a): error adding pod openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a" Netns:"/var/run/netns/d9111732-193c-48ea-9d2b-a4ad23d8df0e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Path:"" ERRORED: error configuring pod 
[openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: > pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 08:52:03.931842 master-0 kubenswrapper[6976]: E0318 08:52:03.931764 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus(7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus(7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-5dbbb8b86f-25rbq_openshift-multus_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac_0(365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a): error adding pod 
openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a\\\" Netns:\\\"/var/run/netns/d9111732-193c-48ea-9d2b-a4ad23d8df0e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-5dbbb8b86f-25rbq;K8S_POD_INFRA_CONTAINER_ID=365ecf57edbdffcdb419856c4e48f97a1f77ba36f3295d46169c4063018d630a;K8S_POD_UID=7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq] networking: Multus: [openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: SetNetworkStatus: failed to update the pod multus-admission-controller-5dbbb8b86f-25rbq in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-5dbbb8b86f-25rbq?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" 
podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" Mar 18 08:52:03.950414 master-0 kubenswrapper[6976]: E0318 08:52:03.950360 6976 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 18 08:52:03.950414 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190" Netns:"/var/run/netns/4bcd2019-7a71-4615-9de5-ce9784728836" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.950414 master-0 kubenswrapper[6976]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.950414 master-0 kubenswrapper[6976]: > Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: E0318 08:52:03.950438 6976 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190" Netns:"/var/run/netns/4bcd2019-7a71-4615-9de5-ce9784728836" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for 
pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: E0318 08:52:03.950458 6976 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190" Netns:"/var/run/netns/4bcd2019-7a71-4615-9de5-ce9784728836" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f" Path:"" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: 
[openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: > pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:52:03.950619 master-0 kubenswrapper[6976]: E0318 08:52:03.950525 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"marketplace-operator-89ccd998f-m862c_openshift-marketplace(ca9d4694-8675-47c5-819f-89bba9dcdc0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"marketplace-operator-89ccd998f-m862c_openshift-marketplace(ca9d4694-8675-47c5-819f-89bba9dcdc0f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_marketplace-operator-89ccd998f-m862c_openshift-marketplace_ca9d4694-8675-47c5-819f-89bba9dcdc0f_0(c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190): error adding pod openshift-marketplace_marketplace-operator-89ccd998f-m862c to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" 
name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190\\\" Netns:\\\"/var/run/netns/4bcd2019-7a71-4615-9de5-ce9784728836\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=marketplace-operator-89ccd998f-m862c;K8S_POD_INFRA_CONTAINER_ID=c40b284abd033ef162c8e8cfa5a93fa113b510d85827d5ffa942ba9a5cffb190;K8S_POD_UID=ca9d4694-8675-47c5-819f-89bba9dcdc0f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/marketplace-operator-89ccd998f-m862c] networking: Multus: [openshift-marketplace/marketplace-operator-89ccd998f-m862c/ca9d4694-8675-47c5-819f-89bba9dcdc0f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: SetNetworkStatus: failed to update the pod marketplace-operator-89ccd998f-m862c in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-89ccd998f-m862c?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" podUID="ca9d4694-8675-47c5-819f-89bba9dcdc0f" Mar 18 08:52:04.706918 master-0 kubenswrapper[6976]: I0318 08:52:04.706862 6976 patch_prober.go:28] interesting 
pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body=
Mar 18 08:52:04.708002 master-0 kubenswrapper[6976]: I0318 08:52:04.706867 6976 patch_prober.go:28] interesting pod/operator-controller-controller-manager-57777556ff-xfqsm container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused" start-of-body=
Mar 18 08:52:04.708002 master-0 kubenswrapper[6976]: I0318 08:52:04.706979 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/readyz\": dial tcp 10.128.0.35:8081: connect: connection refused"
Mar 18 08:52:04.708002 master-0 kubenswrapper[6976]: I0318 08:52:04.707186 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.35:8081/healthz\": dial tcp 10.128.0.35:8081: connect: connection refused"
Mar 18 08:52:05.365501 master-0 kubenswrapper[6976]: I0318 08:52:05.365422 6976 generic.go:334] "Generic (PLEG): container finished" podID="7cac1300-44c1-4a7d-8d14-efa9702ad9df" containerID="fdb4bcca892ef3b8b38b6412f754f472839917394e632bf7ec218fe086926be2" exitCode=0
Mar 18 08:52:06.123932 master-0 kubenswrapper[6976]: I0318 08:52:06.123852 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body=
Mar 18 08:52:06.124750 master-0 kubenswrapper[6976]: I0318 08:52:06.123957 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/readyz\": dial tcp 10.128.0.34:8081: connect: connection refused"
Mar 18 08:52:06.124750 master-0 kubenswrapper[6976]: I0318 08:52:06.123852 6976 patch_prober.go:28] interesting pod/catalogd-controller-manager-6864dc98f7-vbxdw container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused" start-of-body=
Mar 18 08:52:06.124750 master-0 kubenswrapper[6976]: I0318 08:52:06.124101 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.34:8081/healthz\": dial tcp 10.128.0.34:8081: connect: connection refused"
Mar 18 08:52:07.377728 master-0 kubenswrapper[6976]: I0318 08:52:07.377628 6976 generic.go:334] "Generic (PLEG): container finished" podID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerID="f7406136c7d1b5446d31fb2d477916274551fd8657f89454d9fad0aeccedb87c" exitCode=0
Mar 18 08:52:09.571599 master-0 kubenswrapper[6976]: I0318 08:52:09.571443 6976 patch_prober.go:28] interesting pod/controller-manager-7c945f8f5b-967lx container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 18 08:52:09.573411 master-0 kubenswrapper[6976]: I0318 08:52:09.571654 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 18 08:52:09.573411 master-0 kubenswrapper[6976]: I0318 08:52:09.571444 6976 patch_prober.go:28] interesting pod/controller-manager-7c945f8f5b-967lx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 18 08:52:09.573411 master-0 kubenswrapper[6976]: I0318 08:52:09.571818 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 18 08:52:10.667526 master-0 kubenswrapper[6976]: E0318 08:52:10.667468 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 18 08:52:10.668176 master-0 kubenswrapper[6976]: E0318 08:52:10.667659 6976 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.009s"
Mar 18 08:52:10.668176 master-0 kubenswrapper[6976]: I0318 08:52:10.667737 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:52:10.668176 master-0 kubenswrapper[6976]: I0318 08:52:10.667808 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:52:10.668176 master-0 kubenswrapper[6976]: I0318 08:52:10.667836 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerDied","Data":"e7040e73164a56f089f0acc8e8f60bd6ac708b6b6770784a34fbb303688099ef"}
Mar 18 08:52:10.668388 master-0 kubenswrapper[6976]: I0318 08:52:10.668371 6976 scope.go:117] "RemoveContainer" containerID="bc52f72875ab784115d2ae7cf81aabfc20eff1b537ca6458d743902aaf4541e4"
Mar 18 08:52:10.668769 master-0 kubenswrapper[6976]: I0318 08:52:10.668691 6976 scope.go:117] "RemoveContainer" containerID="177f16090fa41cba4e3892f17219367dee40fa3695daf9c589750f25c0f6d328"
Mar 18 08:52:10.675064 master-0 kubenswrapper[6976]: I0318 08:52:10.675019 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 08:52:11.404095 master-0 kubenswrapper[6976]: I0318 08:52:11.404044 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/0.log"
Mar 18 08:52:11.407841 master-0 kubenswrapper[6976]: I0318 08:52:11.407789 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/0.log"
Mar 18 08:52:12.684399 master-0 kubenswrapper[6976]: E0318 08:52:12.684244 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 08:52:19.254336 master-0 kubenswrapper[6976]: E0318 08:52:19.254224 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 08:52:19.571590 master-0 kubenswrapper[6976]: I0318 08:52:19.571408 6976 patch_prober.go:28] interesting pod/controller-manager-7c945f8f5b-967lx container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 18 08:52:19.571590 master-0 kubenswrapper[6976]: I0318 08:52:19.571492 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 18 08:52:19.572008 master-0 kubenswrapper[6976]: I0318 08:52:19.571542 6976 patch_prober.go:28] interesting pod/controller-manager-7c945f8f5b-967lx container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused" start-of-body=
Mar 18 08:52:19.572008 master-0 kubenswrapper[6976]: I0318 08:52:19.571710 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.48:8443/healthz\": dial tcp 10.128.0.48:8443: connect: connection refused"
Mar 18 08:52:19.793943 master-0 kubenswrapper[6976]: E0318 08:52:19.793831 6976 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.126s"
Mar 18 08:52:19.794278 master-0 kubenswrapper[6976]: I0318 08:52:19.794179 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:52:19.794510 master-0 kubenswrapper[6976]: I0318 08:52:19.794474 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:52:19.794857 master-0 kubenswrapper[6976]: I0318 08:52:19.794481 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 08:52:19.794857 master-0 kubenswrapper[6976]: I0318 08:52:19.794605 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerDied","Data":"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"}
Mar 18 08:52:19.794857 master-0 kubenswrapper[6976]: I0318 08:52:19.794637 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:52:19.794857 master-0 kubenswrapper[6976]: I0318 08:52:19.794777 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:52:19.795375 master-0 kubenswrapper[6976]: I0318 08:52:19.795131 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:52:19.795375 master-0 kubenswrapper[6976]: I0318 08:52:19.795275 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:52:19.795690 master-0 kubenswrapper[6976]: I0318 08:52:19.795653 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"
Mar 18 08:52:19.796053 master-0 kubenswrapper[6976]: I0318 08:52:19.795651 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"
Mar 18 08:52:19.796053 master-0 kubenswrapper[6976]: I0318 08:52:19.796029 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:52:19.797634 master-0 kubenswrapper[6976]: I0318 08:52:19.796395 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c"
Mar 18 08:52:19.797634 master-0 kubenswrapper[6976]: I0318 08:52:19.796809 6976 scope.go:117] "RemoveContainer" containerID="3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"
Mar 18 08:52:19.797634 master-0 kubenswrapper[6976]: I0318 08:52:19.797363 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"
Mar 18 08:52:19.797958 master-0 kubenswrapper[6976]: I0318 08:52:19.797817 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 08:52:19.799960 master-0 kubenswrapper[6976]: I0318 08:52:19.799336 6976 scope.go:117] "RemoveContainer" containerID="fc3bba74c1c5dfc4469c628e1ccd99032fb59aaf6362379db3f1337bbf0219a6"
Mar 18 08:52:19.800951 master-0 kubenswrapper[6976]: I0318 08:52:19.800911 6976 scope.go:117] "RemoveContainer" containerID="94a4ad92cd3b53ae4641e35e7fd4ec8fccd8630c21c0fc3c12a574e02645e3da"
Mar 18 08:52:19.802326 master-0 kubenswrapper[6976]: I0318 08:52:19.802212 6976 scope.go:117] "RemoveContainer" containerID="fdb4bcca892ef3b8b38b6412f754f472839917394e632bf7ec218fe086926be2"
Mar 18 08:52:19.809960 master-0 kubenswrapper[6976]: I0318 08:52:19.807708 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"
Mar 18 08:52:19.809960 master-0 kubenswrapper[6976]: I0318 08:52:19.808029 6976 scope.go:117] "RemoveContainer" containerID="9c9d46ecc19961b32a9a632092c439cef6feaecffc62b43586ab2e3093d0896c"
Mar 18 08:52:19.810212 master-0 kubenswrapper[6976]: I0318 08:52:19.810155 6976 scope.go:117] "RemoveContainer" containerID="9cdce5f3b67476e4d83692d6a7f121d082ca7bc4e1f5227b44f8955003a46e71"
Mar 18 08:52:19.811706 master-0 kubenswrapper[6976]: I0318 08:52:19.810878 6976 scope.go:117] "RemoveContainer" containerID="b7023722fb31c9ade901bb4f5f5537f159e85f319ef882c910c37283dbf679ec"
Mar 18 08:52:19.811706 master-0 kubenswrapper[6976]: I0318 08:52:19.810922 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-2xs9n"
Mar 18 08:52:19.811706 master-0 kubenswrapper[6976]: I0318 08:52:19.811061 6976 scope.go:117] "RemoveContainer" containerID="35bec5aad4d31f588044876420b3abf5aa56e6a349124b911e43ef3a01a96e33"
Mar 18 08:52:19.816292 master-0 kubenswrapper[6976]: I0318 08:52:19.815844 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 18 08:52:19.820238 master-0 kubenswrapper[6976]: I0318 08:52:19.820164 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerDied","Data":"8ff399eba975fe3e4ac2c3d81b3e52845b1835ad72d3a17e7e74d5e7eca9397d"}
Mar 18 08:52:19.820339 master-0 kubenswrapper[6976]: I0318 08:52:19.820243 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerDied","Data":"cdf9805777db651916bc0fbdb03aeca74e0291990d89a5792cd9c2058bcbad82"}
Mar 18 08:52:19.820339 master-0 kubenswrapper[6976]: I0318 08:52:19.820281 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerDied","Data":"9cdce5f3b67476e4d83692d6a7f121d082ca7bc4e1f5227b44f8955003a46e71"}
Mar 18 08:52:19.820339 master-0 kubenswrapper[6976]: I0318 08:52:19.820318 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerDied","Data":"e101758dad1868c5a7ecd290b1cfffd6e710b7c13cfdccb7b41fe00e23534e6d"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820347 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerDied","Data":"7a5f71287e8b5eb717808046e6ba2bfb7e60eb4819b757b6fc0b37c1ed02f420"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820374 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820401 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"fc3bba74c1c5dfc4469c628e1ccd99032fb59aaf6362379db3f1337bbf0219a6"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820423 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerStarted","Data":"9c9d46ecc19961b32a9a632092c439cef6feaecffc62b43586ab2e3093d0896c"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820446 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerDied","Data":"b07a3a34e91709be9071f795c0e0650539cb11f6bc35fb3bec049b4bc3051c6c"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820471 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"3253d87f-ae48-42cf-950f-f508a9b82d0d","Type":"ContainerDied","Data":"f4700f538c7d454f7c9d134fd47d7a5c2ce673d0b9bd02c96a2dfc730672550e"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820495 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerStarted","Data":"2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820518 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerStarted","Data":"88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820540 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerStarted","Data":"8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820597 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"3253d87f-ae48-42cf-950f-f508a9b82d0d","Type":"ContainerDied","Data":"6669c488a020cf374cca62487f896819e27005e13ddd29853b483ea8a721d767"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820624 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6669c488a020cf374cca62487f896819e27005e13ddd29853b483ea8a721d767"
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820646 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"38b830ff-8938-4f21-8977-c29a19c85afb","Type":"ContainerDied","Data":"4eeb3f8508d8d3c4f3d88616faaf160c40c1688d847f4d4385e29255722ded89"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820667 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeb3f8508d8d3c4f3d88616faaf160c40c1688d847f4d4385e29255722ded89"
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820690 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"b75d3625-4131-465d-a8e2-4c42588c7630","Type":"ContainerDied","Data":"a3d7e4fd3a2cab558b1ebece0211a1e0de8af572fefd420da566dc2b08839acd"}
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820715 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d7e4fd3a2cab558b1ebece0211a1e0de8af572fefd420da566dc2b08839acd"
Mar 18 08:52:19.820720 master-0 kubenswrapper[6976]: I0318 08:52:19.820734 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerStarted","Data":"2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820757 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerStarted","Data":"33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820781 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerStarted","Data":"9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820806 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerStarted","Data":"f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820828 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" event={"ID":"0f6a7f55-84bd-4ea5-8248-4cb565904c3b","Type":"ContainerDied","Data":"66cbf701fabf0e0f193e14614de147bfd5b674f1f5978178edd97cd8b89c12a4"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820855 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerDied","Data":"bc52f72875ab784115d2ae7cf81aabfc20eff1b537ca6458d743902aaf4541e4"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820883 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerDied","Data":"177f16090fa41cba4e3892f17219367dee40fa3695daf9c589750f25c0f6d328"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820910 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerDied","Data":"b7023722fb31c9ade901bb4f5f5537f159e85f319ef882c910c37283dbf679ec"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820934 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"fc3bba74c1c5dfc4469c628e1ccd99032fb59aaf6362379db3f1337bbf0219a6"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820958 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerDied","Data":"9d25c9c9b5ced91c32a1b9dd7e48ce6b3235062e8dd7fa065d776452831b8b1b"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.820982 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerDied","Data":"35bec5aad4d31f588044876420b3abf5aa56e6a349124b911e43ef3a01a96e33"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821007 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerDied","Data":"94a4ad92cd3b53ae4641e35e7fd4ec8fccd8630c21c0fc3c12a574e02645e3da"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821030 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"2db73d7101a43abc812f123a338de4314d42908c424cba5f3dfda66103668e89"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821052 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"e170620a09f67f7dd5644ef0ed06bf71397ac82649b983c533838793eeba5434"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821074 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"1951546c85592fe98e5dbb82d2390a079377b906f6ce17c831e35dd6a20e3c5a"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821094 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"dd65c9ff55caaa591c9ce309cbf2e71c0d904c09319b714ab36cd668cef65506"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821113 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"9bb40497785c5f7d8d5301fe57c4b67d01320ad9570331c3ae357b52e29702f0"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821136 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerDied","Data":"9c9d46ecc19961b32a9a632092c439cef6feaecffc62b43586ab2e3093d0896c"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821160 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerDied","Data":"fdb4bcca892ef3b8b38b6412f754f472839917394e632bf7ec218fe086926be2"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821186 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerDied","Data":"f7406136c7d1b5446d31fb2d477916274551fd8657f89454d9fad0aeccedb87c"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821210 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerStarted","Data":"c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70"}
Mar 18 08:52:19.821799 master-0 kubenswrapper[6976]: I0318 08:52:19.821231 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerStarted","Data":"9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8"}
Mar 18 08:52:19.823468 master-0 kubenswrapper[6976]: I0318 08:52:19.821967 6976 scope.go:117] "RemoveContainer" containerID="f7406136c7d1b5446d31fb2d477916274551fd8657f89454d9fad0aeccedb87c"
Mar 18 08:52:19.824791 master-0 kubenswrapper[6976]: I0318 08:52:19.824309 6976 scope.go:117] "RemoveContainer" containerID="66cbf701fabf0e0f193e14614de147bfd5b674f1f5978178edd97cd8b89c12a4"
Mar 18 08:52:19.826100 master-0 kubenswrapper[6976]: I0318 08:52:19.825071 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:52:19.826100 master-0 kubenswrapper[6976]: I0318 08:52:19.825150 6976 scope.go:117] "RemoveContainer" containerID="9d25c9c9b5ced91c32a1b9dd7e48ce6b3235062e8dd7fa065d776452831b8b1b"
Mar 18 08:52:19.827791 master-0 kubenswrapper[6976]: I0318 08:52:19.827705 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 08:52:19.837522 master-0 kubenswrapper[6976]: I0318 08:52:19.837418 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 08:52:19.837522 master-0 kubenswrapper[6976]: I0318 08:52:19.837495 6976 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="2efd3acb-ad15-4248-aaa6-a569caa224f2"
Mar 18 08:52:19.839805 master-0 kubenswrapper[6976]: I0318 08:52:19.839727 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 18 08:52:19.839805 master-0 kubenswrapper[6976]: I0318 08:52:19.839793 6976 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="2efd3acb-ad15-4248-aaa6-a569caa224f2"
Mar 18 08:52:19.907611 master-0 kubenswrapper[6976]: I0318 08:52:19.907574 6976 scope.go:117] "RemoveContainer" containerID="e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"
Mar 18 08:52:19.942178 master-0 kubenswrapper[6976]: I0318 08:52:19.942110 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 08:52:19.946688 master-0 kubenswrapper[6976]: I0318 08:52:19.946648 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 18 08:52:20.063047 master-0 kubenswrapper[6976]: I0318 08:52:20.062657 6976 scope.go:117] "RemoveContainer" containerID="f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"
Mar 18 08:52:20.116439 master-0 kubenswrapper[6976]: I0318 08:52:20.116416 6976 scope.go:117] "RemoveContainer" containerID="e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"
Mar 18 08:52:20.118877 master-0 kubenswrapper[6976]: E0318 08:52:20.118343 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964\": container with ID starting with e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964 not found: ID does not exist" containerID="e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"
Mar 18 08:52:20.118877 master-0 kubenswrapper[6976]: I0318 08:52:20.118371 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964"} err="failed to get container status \"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964\": rpc error: code = NotFound desc = could not find container \"e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964\": container with ID starting with e33371633d805977fa012e618334c5eea89b46efb1d6f0253d45e92e47bbf964 not found: ID does not exist"
Mar 18 08:52:20.118877 master-0 kubenswrapper[6976]: I0318 08:52:20.118394 6976 scope.go:117] "RemoveContainer" containerID="f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"
Mar 18 08:52:20.127126 master-0 kubenswrapper[6976]: E0318 08:52:20.126436 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640\": container with ID starting with f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640 not found: ID does not exist" containerID="f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"
Mar 18 08:52:20.127126 master-0 kubenswrapper[6976]: I0318 08:52:20.126474 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640"} err="failed to get container status \"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640\": rpc error: code = NotFound desc = could not find container \"f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640\": container with ID starting with f277f2362065cc73017df28062682dd2e7050aa19629168ac1d2fca1e8c0c640 not found: ID does not exist"
Mar 18 08:52:20.127126 master-0 kubenswrapper[6976]: I0318 08:52:20.126525 6976 scope.go:117] "RemoveContainer" containerID="3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"
Mar 18 08:52:20.131251 master-0 kubenswrapper[6976]: E0318 08:52:20.131228 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef\": container with ID starting with 3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef not found: ID does not exist" containerID="3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"
Mar 18 08:52:20.131338 master-0 kubenswrapper[6976]: I0318 08:52:20.131255 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef"} err="failed to get container status \"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef\": rpc error: code = NotFound desc = could not find container \"3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef\": container with ID starting with 3d8ad760fd65b5f908f215917d29cb979183ae8b744d77b212db9eae8e3db7ef not found: ID does not exist"
Mar 18 08:52:20.306246 master-0 kubenswrapper[6976]: I0318 08:52:20.306191 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"]
Mar 18 08:52:20.328527 master-0 kubenswrapper[6976]: W0318 08:52:20.328469 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e48895e_f8cf_4e62_8b9a_5a50d8a6ccac.slice/crio-ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf WatchSource:0}: Error finding container ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf: Status 404 returned error can't find the container with id ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf
Mar 18 08:52:20.417308 master-0 kubenswrapper[6976]: I0318 08:52:20.416989 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l"]
Mar 18 08:52:20.463688 master-0 kubenswrapper[6976]: I0318 08:52:20.463654 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-m862c"]
Mar 18 08:52:20.473312 master-0 kubenswrapper[6976]: I0318 08:52:20.469703 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27"]
Mar 18 08:52:20.484121 master-0 kubenswrapper[6976]: I0318 08:52:20.479864 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-2g6x9_0f6a7f55-84bd-4ea5-8248-4cb565904c3b/openshift-controller-manager-operator/0.log"
Mar 18 08:52:20.484121 master-0 kubenswrapper[6976]: I0318 08:52:20.479936 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" event={"ID":"0f6a7f55-84bd-4ea5-8248-4cb565904c3b","Type":"ContainerStarted","Data":"daff3bf2a86ced2535729e03999611e01788a105f1dbac40f9f2b7b848897381"}
Mar 18 08:52:20.488204 master-0 kubenswrapper[6976]: I0318 08:52:20.487736 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr"]
Mar 18 08:52:20.500683 master-0 kubenswrapper[6976]: W0318 08:52:20.498750 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09269324_c908_474d_818f_5cd49406f1e2.slice/crio-fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb WatchSource:0}: Error finding container fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb: Status 404 returned error can't find the container with id fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb
Mar 18 08:52:20.507137 master-0 kubenswrapper[6976]: I0318 08:52:20.504671 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/1.log"
Mar 18 08:52:20.507137 master-0 kubenswrapper[6976]: I0318 08:52:20.504796 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" event={"ID":"e86268c9-7a83-4ccb-979a-feff00cb4b3e","Type":"ContainerStarted","Data":"dbf5c9f276b21afd0b23c9f1822e3d4cef52722c47e61d9da08ce9a07bbe9c8e"}
Mar 18 08:52:20.507775 master-0 kubenswrapper[6976]: I0318 08:52:20.507509 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerStarted","Data":"9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db"}
Mar 18 08:52:20.521180 master-0 kubenswrapper[6976]: I0318 08:52:20.521117 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerStarted","Data":"eca3cc2c6f8e3aeae9e8d1a0e8694ecad0c3c1ccd8351a14dff6726fb181ef90"}
Mar 18 08:52:20.527440 master-0 kubenswrapper[6976]: I0318 08:52:20.526311 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8"}
Mar 18 08:52:20.529242 master-0 kubenswrapper[6976]: I0318 08:52:20.529194 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerStarted","Data":"72ab5355df063971a8723ac73ffe167a74111ca83ef1f5957c8201e93af2ece6"}
Mar 18
08:52:20.531316 master-0 kubenswrapper[6976]: I0318 08:52:20.530437 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" Mar 18 08:52:20.531837 master-0 kubenswrapper[6976]: I0318 08:52:20.531784 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerStarted","Data":"ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf"} Mar 18 08:52:20.532840 master-0 kubenswrapper[6976]: I0318 08:52:20.532787 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" event={"ID":"f6833a48-fccb-42bd-ac90-29f08d5bf7e8","Type":"ContainerStarted","Data":"6f7fc65d624ce13d22d22ba96da2bcd01a27c00fbe5c72b2803f8ccbc5a1dae8"} Mar 18 08:52:20.535427 master-0 kubenswrapper[6976]: I0318 08:52:20.535384 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" event={"ID":"5f827195-f68d-4bd2-865b-a1f041a5c73e","Type":"ContainerStarted","Data":"2e9cd3740cbc4f0605f1eed6dd0188e129d08b1d79688165123b944381ceaaaa"} Mar 18 08:52:20.535815 master-0 kubenswrapper[6976]: I0318 08:52:20.535785 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" Mar 18 08:52:20.539469 master-0 kubenswrapper[6976]: I0318 08:52:20.537377 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerStarted","Data":"94a0ef05ccdfbfbab75ff3d50bbf9ce2c5410905e297dadef1700e3589016d40"} Mar 18 08:52:20.540402 master-0 kubenswrapper[6976]: I0318 08:52:20.540374 6976 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/0.log" Mar 18 08:52:20.540436 master-0 kubenswrapper[6976]: I0318 08:52:20.540421 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3"} Mar 18 08:52:20.542144 master-0 kubenswrapper[6976]: I0318 08:52:20.542112 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/0.log" Mar 18 08:52:20.543067 master-0 kubenswrapper[6976]: I0318 08:52:20.543032 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d"} Mar 18 08:52:20.546725 master-0 kubenswrapper[6976]: I0318 08:52:20.546250 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:52:20.548047 master-0 kubenswrapper[6976]: I0318 08:52:20.548035 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 08:52:20.573858 master-0 kubenswrapper[6976]: I0318 08:52:20.573821 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:20.612042 master-0 kubenswrapper[6976]: I0318 08:52:20.611939 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" 
path="/var/lib/kubelet/pods/9ecb08ad-f7f1-466e-9b8a-b162137bfebd/volumes" Mar 18 08:52:20.666583 master-0 kubenswrapper[6976]: I0318 08:52:20.664721 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-2xs9n"] Mar 18 08:52:20.731581 master-0 kubenswrapper[6976]: I0318 08:52:20.730641 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"] Mar 18 08:52:21.549393 master-0 kubenswrapper[6976]: I0318 08:52:21.549340 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" event={"ID":"c00ee838-424f-482b-942f-08f0952a5ccd","Type":"ContainerStarted","Data":"30c4f18dcbcc9f18a43ee88da7092e594b453df2ae8b1fce02caf6e61a63685f"} Mar 18 08:52:21.551392 master-0 kubenswrapper[6976]: I0318 08:52:21.551334 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" event={"ID":"ca9d4694-8675-47c5-819f-89bba9dcdc0f","Type":"ContainerStarted","Data":"8aef2deed01150bfe4043851c63a0e6b97fd934c62137327d4f1c10f4beb1f04"} Mar 18 08:52:21.553297 master-0 kubenswrapper[6976]: I0318 08:52:21.553273 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xs9n" event={"ID":"e48101ca-f356-45e3-93d7-4e17b8d8066c","Type":"ContainerStarted","Data":"64e6daddf9e1c75183bc383ad71913a134e81a48cb25bcfeb9ca74c12a1be908"} Mar 18 08:52:21.554428 master-0 kubenswrapper[6976]: I0318 08:52:21.554329 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" event={"ID":"09269324-c908-474d-818f-5cd49406f1e2","Type":"ContainerStarted","Data":"fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb"} Mar 18 08:52:21.555866 master-0 kubenswrapper[6976]: I0318 08:52:21.555843 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" event={"ID":"2d0da6e3-3887-4361-8eae-e7447f9ff72c","Type":"ContainerStarted","Data":"7256ec69b2ffa66d04703f282868f73259c67a1650d997cb586ce7e6249d081e"} Mar 18 08:52:21.555923 master-0 kubenswrapper[6976]: I0318 08:52:21.555872 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" event={"ID":"2d0da6e3-3887-4361-8eae-e7447f9ff72c","Type":"ContainerStarted","Data":"1fc4aaf36f3d357358d477445a6e46751b37db5a1b5d446f108b4d2b190e035d"} Mar 18 08:52:22.684651 master-0 kubenswrapper[6976]: E0318 08:52:22.684596 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:52:22.684651 master-0 kubenswrapper[6976]: E0318 08:52:22.684638 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 08:52:23.565337 master-0 kubenswrapper[6976]: I0318 08:52:23.565301 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:23.568300 master-0 kubenswrapper[6976]: I0318 08:52:23.568264 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerStarted","Data":"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b"} Mar 18 08:52:23.569835 master-0 kubenswrapper[6976]: I0318 08:52:23.569793 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" 
event={"ID":"09269324-c908-474d-818f-5cd49406f1e2","Type":"ContainerStarted","Data":"03ea5404c76bcfaac9702a0baf0be68736f15ed07345dbe520ef7a927239ec6a"} Mar 18 08:52:23.571174 master-0 kubenswrapper[6976]: I0318 08:52:23.571111 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" event={"ID":"ca9d4694-8675-47c5-819f-89bba9dcdc0f","Type":"ContainerStarted","Data":"c88fcd910d6e8db24ed27b15176e93cabbfee77fff73e20a53806a79c06e2fd5"} Mar 18 08:52:23.571378 master-0 kubenswrapper[6976]: I0318 08:52:23.571349 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:52:23.574746 master-0 kubenswrapper[6976]: I0318 08:52:23.574696 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xs9n" event={"ID":"e48101ca-f356-45e3-93d7-4e17b8d8066c","Type":"ContainerStarted","Data":"63ed481633489f9ea7177bbe005f142fcfda8a60f56f2c952b505cfd2d3f092d"} Mar 18 08:52:23.579391 master-0 kubenswrapper[6976]: I0318 08:52:23.579362 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 08:52:23.783672 master-0 kubenswrapper[6976]: I0318 08:52:23.783618 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:24.582591 master-0 kubenswrapper[6976]: I0318 08:52:24.582527 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-2xs9n" event={"ID":"e48101ca-f356-45e3-93d7-4e17b8d8066c","Type":"ContainerStarted","Data":"546a19eb365010b9403d7f19c9a625e7b470a3b734b6de68430880ed4aef474c"} Mar 18 08:52:24.585290 master-0 kubenswrapper[6976]: I0318 08:52:24.585250 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" 
event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerStarted","Data":"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f"} Mar 18 08:52:25.574822 master-0 kubenswrapper[6976]: I0318 08:52:25.574757 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:25.606415 master-0 kubenswrapper[6976]: I0318 08:52:25.606295 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:26.783875 master-0 kubenswrapper[6976]: I0318 08:52:26.783797 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:52:28.607846 master-0 kubenswrapper[6976]: I0318 08:52:28.607775 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" event={"ID":"c00ee838-424f-482b-942f-08f0952a5ccd","Type":"ContainerStarted","Data":"3e57af7f7022ed886b0cd05fdfa6226b081a49f453d76d9ddd3f74bb13195be8"} Mar 18 08:52:28.608321 master-0 kubenswrapper[6976]: I0318 08:52:28.608201 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:52:28.611323 master-0 kubenswrapper[6976]: I0318 08:52:28.611274 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" event={"ID":"f6833a48-fccb-42bd-ac90-29f08d5bf7e8","Type":"ContainerStarted","Data":"5eac672b1f5207af4bc94b1658fb729d78c0bfce1d1c038b1a0c4a18138222d8"} Mar 18 08:52:28.611687 master-0 kubenswrapper[6976]: I0318 08:52:28.611629 6976 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:52:28.617863 master-0 kubenswrapper[6976]: I0318 08:52:28.617825 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 08:52:28.618068 master-0 kubenswrapper[6976]: I0318 08:52:28.618025 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" event={"ID":"2d0da6e3-3887-4361-8eae-e7447f9ff72c","Type":"ContainerStarted","Data":"eff8515f7824ab4366b3686f83336181d1ef884da04bbecf12f9008db8dde14c"} Mar 18 08:52:28.618765 master-0 kubenswrapper[6976]: I0318 08:52:28.618736 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:52:28.624214 master-0 kubenswrapper[6976]: I0318 08:52:28.624181 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 08:52:30.595517 master-0 kubenswrapper[6976]: I0318 08:52:30.595457 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:32.911211 master-0 kubenswrapper[6976]: E0318 08:52:32.911143 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:33.386146 master-0 kubenswrapper[6976]: E0318 08:52:33.386083 6976 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe2682e4_cb63_4102_a83e_ef28023e273a.slice/crio-conmon-9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe2682e4_cb63_4102_a83e_ef28023e273a.slice/crio-9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c.scope\": RecentStats: unable to find data in memory cache]" Mar 18 08:52:33.790554 master-0 kubenswrapper[6976]: I0318 08:52:33.790468 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:33.797498 master-0 kubenswrapper[6976]: I0318 08:52:33.797474 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:52:33.927888 master-0 kubenswrapper[6976]: I0318 08:52:33.927697 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log" Mar 18 08:52:33.928749 master-0 kubenswrapper[6976]: I0318 08:52:33.928680 6976 generic.go:334] "Generic (PLEG): container finished" podID="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" containerID="2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b" exitCode=255 Mar 18 08:52:33.928866 master-0 kubenswrapper[6976]: I0318 08:52:33.928809 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerDied","Data":"2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b"} Mar 18 08:52:33.928949 master-0 kubenswrapper[6976]: I0318 08:52:33.928863 6976 scope.go:117] "RemoveContainer" 
containerID="e101758dad1868c5a7ecd290b1cfffd6e710b7c13cfdccb7b41fe00e23534e6d" Mar 18 08:52:33.929509 master-0 kubenswrapper[6976]: I0318 08:52:33.929462 6976 scope.go:117] "RemoveContainer" containerID="2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b" Mar 18 08:52:33.929812 master-0 kubenswrapper[6976]: E0318 08:52:33.929748 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-dddff6458-cpbdr_openshift-kube-scheduler-operator(0f9ba06c-7a6b-4f46-a747-80b0a0b58600)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" podUID="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" Mar 18 08:52:33.932804 master-0 kubenswrapper[6976]: I0318 08:52:33.932683 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log" Mar 18 08:52:33.933720 master-0 kubenswrapper[6976]: I0318 08:52:33.933653 6976 generic.go:334] "Generic (PLEG): container finished" podID="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" containerID="33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012" exitCode=255 Mar 18 08:52:33.933881 master-0 kubenswrapper[6976]: I0318 08:52:33.933758 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerDied","Data":"33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012"} Mar 18 08:52:33.936027 master-0 kubenswrapper[6976]: I0318 08:52:33.935957 6976 scope.go:117] "RemoveContainer" containerID="33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012" Mar 18 
08:52:33.936876 master-0 kubenswrapper[6976]: I0318 08:52:33.936811 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/1.log" Mar 18 08:52:33.937042 master-0 kubenswrapper[6976]: E0318 08:52:33.936839 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-ff989d6cc-xlfrc_openshift-kube-controller-manager-operator(1df9560e-21f0-44fe-bb51-4bc0fde4a3ac)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" podUID="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" Mar 18 08:52:33.937793 master-0 kubenswrapper[6976]: I0318 08:52:33.937507 6976 generic.go:334] "Generic (PLEG): container finished" podID="65cff83a-8d8f-4e4f-96ef-99941c29ba53" containerID="2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed" exitCode=255 Mar 18 08:52:33.937793 master-0 kubenswrapper[6976]: I0318 08:52:33.937553 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerDied","Data":"2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed"} Mar 18 08:52:33.938517 master-0 kubenswrapper[6976]: I0318 08:52:33.938407 6976 scope.go:117] "RemoveContainer" containerID="2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed" Mar 18 08:52:33.939070 master-0 kubenswrapper[6976]: E0318 08:52:33.938965 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator 
pod=kube-apiserver-operator-8b68b9d9b-pp4r9_openshift-kube-apiserver-operator(65cff83a-8d8f-4e4f-96ef-99941c29ba53)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" podUID="65cff83a-8d8f-4e4f-96ef-99941c29ba53" Mar 18 08:52:33.940401 master-0 kubenswrapper[6976]: I0318 08:52:33.940346 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/1.log" Mar 18 08:52:33.941292 master-0 kubenswrapper[6976]: I0318 08:52:33.941241 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/0.log" Mar 18 08:52:33.941421 master-0 kubenswrapper[6976]: I0318 08:52:33.941309 6976 generic.go:334] "Generic (PLEG): container finished" podID="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" containerID="88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71" exitCode=255 Mar 18 08:52:33.941421 master-0 kubenswrapper[6976]: I0318 08:52:33.941391 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerDied","Data":"88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71"} Mar 18 08:52:33.942090 master-0 kubenswrapper[6976]: I0318 08:52:33.942017 6976 scope.go:117] "RemoveContainer" containerID="88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71" Mar 18 08:52:33.942423 master-0 kubenswrapper[6976]: E0318 08:52:33.942355 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=network-operator pod=network-operator-7bd846bfc4-6rtpx_openshift-network-operator(8b779ce3-07c4-45ca-b1ca-750c95ed3d0b)\"" 
pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" podUID="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" Mar 18 08:52:33.944198 master-0 kubenswrapper[6976]: I0318 08:52:33.944140 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/1.log" Mar 18 08:52:33.944777 master-0 kubenswrapper[6976]: I0318 08:52:33.944724 6976 generic.go:334] "Generic (PLEG): container finished" podID="be2682e4-cb63-4102-a83e-ef28023e273a" containerID="9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c" exitCode=255 Mar 18 08:52:33.944884 master-0 kubenswrapper[6976]: I0318 08:52:33.944814 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerDied","Data":"9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c"} Mar 18 08:52:33.945273 master-0 kubenswrapper[6976]: I0318 08:52:33.945225 6976 scope.go:117] "RemoveContainer" containerID="9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c" Mar 18 08:52:33.945588 master-0 kubenswrapper[6976]: E0318 08:52:33.945496 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_openshift-kube-storage-version-migrator-operator(be2682e4-cb63-4102-a83e-ef28023e273a)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" podUID="be2682e4-cb63-4102-a83e-ef28023e273a" Mar 18 08:52:33.949184 master-0 kubenswrapper[6976]: I0318 08:52:33.949122 6976 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/1.log" Mar 18 08:52:33.950068 master-0 kubenswrapper[6976]: I0318 08:52:33.950023 6976 generic.go:334] "Generic (PLEG): container finished" podID="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" containerID="f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816" exitCode=255 Mar 18 08:52:33.950355 master-0 kubenswrapper[6976]: I0318 08:52:33.950309 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerDied","Data":"f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816"} Mar 18 08:52:33.953440 master-0 kubenswrapper[6976]: I0318 08:52:33.953084 6976 scope.go:117] "RemoveContainer" containerID="f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816" Mar 18 08:52:33.953440 master-0 kubenswrapper[6976]: E0318 08:52:33.953408 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-d65958b8-m8p9p_openshift-apiserver-operator(81eefe1b-f683-4740-8fb0-0a5050f9b4a4)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" podUID="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" Mar 18 08:52:33.959533 master-0 kubenswrapper[6976]: I0318 08:52:33.959473 6976 scope.go:117] "RemoveContainer" containerID="cdf9805777db651916bc0fbdb03aeca74e0291990d89a5792cd9c2058bcbad82" Mar 18 08:52:33.985111 master-0 kubenswrapper[6976]: I0318 08:52:33.985050 6976 scope.go:117] "RemoveContainer" containerID="e7040e73164a56f089f0acc8e8f60bd6ac708b6b6770784a34fbb303688099ef" Mar 18 08:52:34.009767 master-0 kubenswrapper[6976]: 
I0318 08:52:34.009707 6976 scope.go:117] "RemoveContainer" containerID="fd295b6b7843cd03ce43cecd7dcd871e030a3bf9af1473694567c5a5799d4c76" Mar 18 08:52:34.030537 master-0 kubenswrapper[6976]: I0318 08:52:34.030480 6976 scope.go:117] "RemoveContainer" containerID="8ff399eba975fe3e4ac2c3d81b3e52845b1835ad72d3a17e7e74d5e7eca9397d" Mar 18 08:52:34.053283 master-0 kubenswrapper[6976]: I0318 08:52:34.053236 6976 scope.go:117] "RemoveContainer" containerID="b07a3a34e91709be9071f795c0e0650539cb11f6bc35fb3bec049b4bc3051c6c" Mar 18 08:52:34.958818 master-0 kubenswrapper[6976]: I0318 08:52:34.958726 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log" Mar 18 08:52:34.961082 master-0 kubenswrapper[6976]: I0318 08:52:34.961035 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log" Mar 18 08:52:34.963986 master-0 kubenswrapper[6976]: I0318 08:52:34.963917 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/1.log" Mar 18 08:52:34.966320 master-0 kubenswrapper[6976]: I0318 08:52:34.966269 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/1.log" Mar 18 08:52:34.969149 master-0 kubenswrapper[6976]: I0318 08:52:34.969108 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/1.log" 
Mar 18 08:52:34.971217 master-0 kubenswrapper[6976]: I0318 08:52:34.971175 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/1.log" Mar 18 08:52:36.254999 master-0 kubenswrapper[6976]: E0318 08:52:36.254917 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 18 08:52:36.668181 master-0 kubenswrapper[6976]: E0318 08:52:36.668009 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)" pod="openshift-etcd/etcd-master-0" Mar 18 08:52:44.598250 master-0 kubenswrapper[6976]: I0318 08:52:44.598153 6976 scope.go:117] "RemoveContainer" containerID="9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c" Mar 18 08:52:45.035938 master-0 kubenswrapper[6976]: I0318 08:52:45.035862 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/1.log" Mar 18 08:52:45.036230 master-0 kubenswrapper[6976]: I0318 08:52:45.035947 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerStarted","Data":"e0c10cb728f84836bdf3fdacd9f7ace9b139b03a5e08557846d8eceff033db2d"} Mar 18 08:52:45.598783 master-0 kubenswrapper[6976]: I0318 08:52:45.598685 6976 scope.go:117] "RemoveContainer" 
containerID="2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed" Mar 18 08:52:45.599793 master-0 kubenswrapper[6976]: I0318 08:52:45.598840 6976 scope.go:117] "RemoveContainer" containerID="2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b" Mar 18 08:52:46.041553 master-0 kubenswrapper[6976]: I0318 08:52:46.041513 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log" Mar 18 08:52:46.041989 master-0 kubenswrapper[6976]: I0318 08:52:46.041951 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" event={"ID":"0f9ba06c-7a6b-4f46-a747-80b0a0b58600","Type":"ContainerStarted","Data":"d7608fe34e378740d75e5700927864c73724bd06defaa4417e4cf493ed7fa031"} Mar 18 08:52:46.043339 master-0 kubenswrapper[6976]: I0318 08:52:46.043294 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/1.log" Mar 18 08:52:46.043453 master-0 kubenswrapper[6976]: I0318 08:52:46.043348 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerStarted","Data":"26f8c4214ea54fb5e2ff7d9fa93e91ddc6301a4725fdb41f15e4fe0ec185b735"} Mar 18 08:52:47.598643 master-0 kubenswrapper[6976]: I0318 08:52:47.598591 6976 scope.go:117] "RemoveContainer" containerID="33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012" Mar 18 08:52:47.599704 master-0 kubenswrapper[6976]: I0318 08:52:47.599050 6976 scope.go:117] "RemoveContainer" containerID="f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816" Mar 18 08:52:48.059519 master-0 
kubenswrapper[6976]: I0318 08:52:48.059473 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/1.log" Mar 18 08:52:48.059744 master-0 kubenswrapper[6976]: I0318 08:52:48.059628 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" event={"ID":"81eefe1b-f683-4740-8fb0-0a5050f9b4a4","Type":"ContainerStarted","Data":"f08aeb5da7826787c1ec1cdeffdafe2940ff67da689328c24f91d38398d4c82f"} Mar 18 08:52:48.062533 master-0 kubenswrapper[6976]: I0318 08:52:48.062507 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log" Mar 18 08:52:48.062635 master-0 kubenswrapper[6976]: I0318 08:52:48.062604 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" event={"ID":"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac","Type":"ContainerStarted","Data":"397f1b865b872e0766a977139b9bec63c83da8c488f3a67f20f61eeee5441847"} Mar 18 08:52:48.598961 master-0 kubenswrapper[6976]: I0318 08:52:48.598890 6976 scope.go:117] "RemoveContainer" containerID="88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71" Mar 18 08:52:49.071838 master-0 kubenswrapper[6976]: I0318 08:52:49.071770 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/1.log" Mar 18 08:52:49.071838 master-0 kubenswrapper[6976]: I0318 08:52:49.071828 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" 
event={"ID":"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b","Type":"ContainerStarted","Data":"c4b2b5cdda865559c55ffa8912182991bd1e27c68c083b72c000a6f2a9e703dc"} Mar 18 08:52:58.215274 master-0 kubenswrapper[6976]: I0318 08:52:58.215153 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"] Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: E0318 08:52:58.215587 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215622 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: E0318 08:52:58.215647 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215660 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: E0318 08:52:58.215677 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215692 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: E0318 08:52:58.215716 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215728 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" containerName="installer" Mar 18 
08:52:58.216547 master-0 kubenswrapper[6976]: E0318 08:52:58.215747 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215759 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215912 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215930 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215948 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ecb08ad-f7f1-466e-9b8a-b162137bfebd" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215976 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 08:52:58.216547 master-0 kubenswrapper[6976]: I0318 08:52:58.215993 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 08:52:58.218046 master-0 kubenswrapper[6976]: I0318 08:52:58.216863 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 08:52:58.226068 master-0 kubenswrapper[6976]: I0318 08:52:58.226006 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 08:52:58.227747 master-0 kubenswrapper[6976]: I0318 08:52:58.227664 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"] Mar 18 08:52:58.229365 master-0 kubenswrapper[6976]: I0318 08:52:58.229303 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.233941 master-0 kubenswrapper[6976]: I0318 08:52:58.233862 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-jqmlx" Mar 18 08:52:58.233941 master-0 kubenswrapper[6976]: I0318 08:52:58.233918 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-gx9ws" Mar 18 08:52:58.234231 master-0 kubenswrapper[6976]: I0318 08:52:58.233935 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 08:52:58.234231 master-0 kubenswrapper[6976]: I0318 08:52:58.233886 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 08:52:58.234432 master-0 kubenswrapper[6976]: I0318 08:52:58.234343 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 08:52:58.238171 master-0 kubenswrapper[6976]: I0318 08:52:58.238111 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 08:52:58.238822 master-0 kubenswrapper[6976]: 
I0318 08:52:58.238760 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"] Mar 18 08:52:58.240963 master-0 kubenswrapper[6976]: I0318 08:52:58.240883 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.242430 master-0 kubenswrapper[6976]: I0318 08:52:58.242383 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 08:52:58.242559 master-0 kubenswrapper[6976]: I0318 08:52:58.242492 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 08:52:58.242651 master-0 kubenswrapper[6976]: I0318 08:52:58.242510 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 08:52:58.246605 master-0 kubenswrapper[6976]: I0318 08:52:58.245546 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"] Mar 18 08:52:58.246823 master-0 kubenswrapper[6976]: I0318 08:52:58.246745 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.255502 master-0 kubenswrapper[6976]: I0318 08:52:58.253159 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2gpbt"] Mar 18 08:52:58.255502 master-0 kubenswrapper[6976]: I0318 08:52:58.254398 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfdcz"] Mar 18 08:52:58.255502 master-0 kubenswrapper[6976]: I0318 08:52:58.255307 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 08:52:58.255502 master-0 kubenswrapper[6976]: I0318 08:52:58.255450 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 08:52:58.255936 master-0 kubenswrapper[6976]: I0318 08:52:58.255711 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:52:58.256007 master-0 kubenswrapper[6976]: I0318 08:52:58.255978 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 08:52:58.256074 master-0 kubenswrapper[6976]: I0318 08:52:58.256010 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hbb9q" Mar 18 08:52:58.257432 master-0 kubenswrapper[6976]: I0318 08:52:58.256217 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:52:58.257432 master-0 kubenswrapper[6976]: I0318 08:52:58.256257 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 
08:52:58.257432 master-0 kubenswrapper[6976]: I0318 08:52:58.256214 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.257965 master-0 kubenswrapper[6976]: I0318 08:52:58.257681 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-d6jf5" Mar 18 08:52:58.257965 master-0 kubenswrapper[6976]: I0318 08:52:58.257943 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 08:52:58.258140 master-0 kubenswrapper[6976]: I0318 08:52:58.258098 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lgw5q" Mar 18 08:52:58.258201 master-0 kubenswrapper[6976]: I0318 08:52:58.258154 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.263704 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5x8lj"] Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265127 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265840 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265887 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265910 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265932 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265953 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mj95l\" (UniqueName: \"kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265974 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckrf\" (UniqueName: \"kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.265999 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266026 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266049 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266075 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrxv\" (UniqueName: \"kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266099 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266121 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266141 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: 
\"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266165 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266230 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266283 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jq4\" (UniqueName: \"kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266349 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 
08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266433 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266474 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266599 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266655 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266701 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9fbs4\" (UniqueName: \"kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266730 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266763 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266799 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwnvl\" (UniqueName: \"kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 08:52:58.267044 master-0 kubenswrapper[6976]: I0318 08:52:58.266840 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities\") pod \"redhat-marketplace-2gpbt\" (UID: 
\"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.274595 master-0 kubenswrapper[6976]: I0318 08:52:58.269271 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4r6jd"] Mar 18 08:52:58.274595 master-0 kubenswrapper[6976]: I0318 08:52:58.270851 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-mn6mb" Mar 18 08:52:58.274595 master-0 kubenswrapper[6976]: I0318 08:52:58.271158 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 18 08:52:58.274595 master-0 kubenswrapper[6976]: I0318 08:52:58.274158 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 08:52:58.282188 master-0 kubenswrapper[6976]: I0318 08:52:58.275999 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zlc9x" Mar 18 08:52:58.282188 master-0 kubenswrapper[6976]: I0318 08:52:58.276209 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-l7k6v" Mar 18 08:52:58.282188 master-0 kubenswrapper[6976]: I0318 08:52:58.280324 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"] Mar 18 08:52:58.282188 master-0 kubenswrapper[6976]: I0318 08:52:58.281355 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 08:52:58.286720 master-0 kubenswrapper[6976]: I0318 08:52:58.284158 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m9g5m" Mar 18 08:52:58.286720 master-0 kubenswrapper[6976]: I0318 08:52:58.284224 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 18 08:52:58.286720 master-0 kubenswrapper[6976]: I0318 08:52:58.284489 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 08:52:58.286720 master-0 kubenswrapper[6976]: I0318 08:52:58.285439 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 08:52:58.286720 master-0 kubenswrapper[6976]: I0318 08:52:58.285516 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 08:52:58.290046 master-0 kubenswrapper[6976]: I0318 08:52:58.288016 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"] Mar 18 08:52:58.290046 master-0 kubenswrapper[6976]: I0318 08:52:58.288743 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 08:52:58.290867 master-0 kubenswrapper[6976]: I0318 08:52:58.290630 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"] Mar 18 08:52:58.291618 master-0 kubenswrapper[6976]: I0318 08:52:58.291501 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.293124 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"]
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.293631 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.294038 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.294133 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.294336 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 08:52:58.294551 master-0 kubenswrapper[6976]: I0318 08:52:58.294503 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-fhncm"
Mar 18 08:52:58.295316 master-0 kubenswrapper[6976]: I0318 08:52:58.295300 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 18 08:52:58.295498 master-0 kubenswrapper[6976]: I0318 08:52:58.295484 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vvwvf"
Mar 18 08:52:58.297638 master-0 kubenswrapper[6976]: I0318 08:52:58.295643 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 08:52:58.297638 master-0 kubenswrapper[6976]: I0318 08:52:58.296714 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 08:52:58.303710 master-0 kubenswrapper[6976]: I0318 08:52:58.303665 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"]
Mar 18 08:52:58.304500 master-0 kubenswrapper[6976]: I0318 08:52:58.304481 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.305080 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-89rtc"]
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.305503 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.306505 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kldf7"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.306744 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-khzbd"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.306936 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.307137 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.307272 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.307338 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.307277 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 08:52:58.308586 master-0 kubenswrapper[6976]: I0318 08:52:58.307973 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"]
Mar 18 08:52:58.308975 master-0 kubenswrapper[6976]: I0318 08:52:58.308930 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.316604 master-0 kubenswrapper[6976]: I0318 08:52:58.316506 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 08:52:58.317181 master-0 kubenswrapper[6976]: I0318 08:52:58.317161 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kvnts"
Mar 18 08:52:58.317362 master-0 kubenswrapper[6976]: I0318 08:52:58.317346 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 08:52:58.319552 master-0 kubenswrapper[6976]: I0318 08:52:58.319524 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 08:52:58.319840 master-0 kubenswrapper[6976]: I0318 08:52:58.319803 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 08:52:58.319944 master-0 kubenswrapper[6976]: I0318 08:52:58.319920 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 18 08:52:58.320154 master-0 kubenswrapper[6976]: I0318 08:52:58.320122 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 08:52:58.320213 master-0 kubenswrapper[6976]: I0318 08:52:58.320196 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-pws99"
Mar 18 08:52:58.320286 master-0 kubenswrapper[6976]: I0318 08:52:58.320275 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 08:52:58.320338 master-0 kubenswrapper[6976]: I0318 08:52:58.320323 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 08:52:58.320366 master-0 kubenswrapper[6976]: I0318 08:52:58.320357 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 08:52:58.330241 master-0 kubenswrapper[6976]: I0318 08:52:58.330203 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"]
Mar 18 08:52:58.334681 master-0 kubenswrapper[6976]: I0318 08:52:58.334647 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r6jd"]
Mar 18 08:52:58.338057 master-0 kubenswrapper[6976]: I0318 08:52:58.337990 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gpbt"]
Mar 18 08:52:58.340779 master-0 kubenswrapper[6976]: I0318 08:52:58.340256 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"]
Mar 18 08:52:58.342386 master-0 kubenswrapper[6976]: I0318 08:52:58.342243 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"]
Mar 18 08:52:58.342920 master-0 kubenswrapper[6976]: I0318 08:52:58.342767 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.344657 master-0 kubenswrapper[6976]: I0318 08:52:58.344621 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-nxx2s"
Mar 18 08:52:58.344856 master-0 kubenswrapper[6976]: I0318 08:52:58.344836 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 18 08:52:58.345548 master-0 kubenswrapper[6976]: I0318 08:52:58.345238 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfdcz"]
Mar 18 08:52:58.347553 master-0 kubenswrapper[6976]: I0318 08:52:58.347531 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x8lj"]
Mar 18 08:52:58.348922 master-0 kubenswrapper[6976]: I0318 08:52:58.348894 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"]
Mar 18 08:52:58.350591 master-0 kubenswrapper[6976]: I0318 08:52:58.350542 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"]
Mar 18 08:52:58.353678 master-0 kubenswrapper[6976]: I0318 08:52:58.353653 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"]
Mar 18 08:52:58.365549 master-0 kubenswrapper[6976]: I0318 08:52:58.365492 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"]
Mar 18 08:52:58.369218 master-0 kubenswrapper[6976]: I0318 08:52:58.369178 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"]
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.372829 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373148 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csfl2\" (UniqueName: \"kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373204 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373233 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373259 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jq4\" (UniqueName: \"kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373282 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373303 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373323 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373343 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373367 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373389 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q4k8\" (UniqueName: \"kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373409 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373432 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z98qs\" (UniqueName: \"kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373452 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373696 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.373777 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.374016 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.374060 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jcqf\" (UniqueName: \"kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.374098 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.374211 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.375182 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbs4\" (UniqueName: \"kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.375196 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.375237 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.375276 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.376162 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.376740 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377011 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377287 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377466 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377561 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnvl\" (UniqueName: \"kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377628 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377671 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377711 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377740 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377779 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377815 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377853 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377886 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377922 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377950 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.377980 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378013 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj95l\" (UniqueName: \"kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378041 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378090 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kckrf\" (UniqueName: \"kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378125 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378163 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.378165 master-0 kubenswrapper[6976]: I0318 08:52:58.378192 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378222 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpwnv\" (UniqueName: \"kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378254 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378296 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378312 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5cgw\" (UniqueName: \"kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378365 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378392 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378426 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ggjn\" (UniqueName: \"kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378458 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378531 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378581 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378609 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnn98\" (UniqueName: \"kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378652 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6p7s\" (UniqueName: \"kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378688 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrxv\" (UniqueName: \"kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378730 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.379624 master-0 kubenswrapper[6976]: I0318 08:52:58.378763 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.382095 master-0 kubenswrapper[6976]: I0318 08:52:58.382051 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.382269 master-0 kubenswrapper[6976]: I0318 08:52:58.382230 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.387590 master-0 kubenswrapper[6976]: I0318 08:52:58.384169 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.387590 master-0 kubenswrapper[6976]: I0318 08:52:58.386635 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.387766 master-0 kubenswrapper[6976]: I0318 08:52:58.387653 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.387811 master-0 kubenswrapper[6976]: I0318 08:52:58.387786 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.388445 master-0 kubenswrapper[6976]: I0318 08:52:58.387863 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 08:52:58.388445 master-0 kubenswrapper[6976]: I0318 08:52:58.387872 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.394665 master-0 kubenswrapper[6976]: I0318 08:52:58.391158 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" 
Mar 18 08:52:58.398598 master-0 kubenswrapper[6976]: I0318 08:52:58.395752 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbs4\" (UniqueName: \"kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:52:58.398598 master-0 kubenswrapper[6976]: I0318 08:52:58.395821 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"]
Mar 18 08:52:58.401797 master-0 kubenswrapper[6976]: I0318 08:52:58.399974 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"]
Mar 18 08:52:58.401797 master-0 kubenswrapper[6976]: I0318 08:52:58.401582 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.403537 master-0 kubenswrapper[6976]: I0318 08:52:58.403499 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.403624 master-0 kubenswrapper[6976]: I0318 08:52:58.403548 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jq4\" (UniqueName: \"kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:52:58.403900 master-0 kubenswrapper[6976]: I0318 08:52:58.403868 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.405847 master-0 kubenswrapper[6976]: I0318 08:52:58.405819 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj95l\" (UniqueName: \"kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 08:52:58.406525 master-0 kubenswrapper[6976]: I0318 08:52:58.406095 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckrf\" (UniqueName: \"kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"
Mar 18 08:52:58.406525 master-0 kubenswrapper[6976]: I0318 08:52:58.406162 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-89rtc"]
Mar 18 08:52:58.408062 master-0 kubenswrapper[6976]: I0318 08:52:58.408019 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnvl\" (UniqueName: \"kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.413501 master-0 kubenswrapper[6976]: I0318 08:52:58.411180 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 08:52:58.415038 master-0 kubenswrapper[6976]: I0318 08:52:58.414970 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrxv\" (UniqueName: \"kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 08:52:58.444808 master-0 kubenswrapper[6976]: I0318 08:52:58.444756 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:52:58.482136 master-0 kubenswrapper[6976]: I0318 08:52:58.482091 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.482233 master-0 kubenswrapper[6976]: I0318 08:52:58.482147 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q4k8\" (UniqueName: \"kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.482233 master-0 kubenswrapper[6976]: I0318 08:52:58.482176 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.482312 master-0 kubenswrapper[6976]: I0318 08:52:58.482231 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98qs\" (UniqueName: \"kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.482312 master-0 kubenswrapper[6976]: I0318 08:52:58.482263 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.482312 master-0 kubenswrapper[6976]: I0318 08:52:58.482289 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.482395 master-0 kubenswrapper[6976]: I0318 08:52:58.482334 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.482395 master-0 kubenswrapper[6976]: I0318 08:52:58.482361 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jcqf\" (UniqueName: \"kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.482465 master-0 kubenswrapper[6976]: I0318 08:52:58.482396 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.482509 master-0 kubenswrapper[6976]: I0318 08:52:58.482462 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.482693 master-0 kubenswrapper[6976]: I0318 08:52:58.482669 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.482765 master-0 kubenswrapper[6976]: I0318 08:52:58.482742 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.483251 master-0 kubenswrapper[6976]: I0318 08:52:58.483208 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.484064 master-0 kubenswrapper[6976]: I0318 08:52:58.484033 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.484188 master-0 kubenswrapper[6976]: I0318 08:52:58.484159 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.484532 master-0 kubenswrapper[6976]: I0318 08:52:58.484472 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.485584 master-0 kubenswrapper[6976]: I0318 08:52:58.485519 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.485765 master-0 kubenswrapper[6976]: I0318 08:52:58.485722 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.486946 master-0 kubenswrapper[6976]: I0318 08:52:58.486906 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.487022 master-0 kubenswrapper[6976]: I0318 08:52:58.484284 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.487054 master-0 kubenswrapper[6976]: I0318 08:52:58.487032 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.487086 master-0 kubenswrapper[6976]: I0318 08:52:58.487065 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.487192 master-0 kubenswrapper[6976]: I0318 08:52:58.487130 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"
Mar 18 08:52:58.487250 master-0 kubenswrapper[6976]: I0318 08:52:58.487205 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.487250 master-0 kubenswrapper[6976]: I0318 08:52:58.487242 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.487330 master-0 kubenswrapper[6976]: I0318 08:52:58.487267 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.487330 master-0 kubenswrapper[6976]: I0318 08:52:58.487296 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpwnv\" (UniqueName: \"kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.487409 master-0 kubenswrapper[6976]: I0318 08:52:58.487349 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cgw\" (UniqueName: \"kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.487409 master-0 kubenswrapper[6976]: I0318 08:52:58.487401 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.487584 master-0 kubenswrapper[6976]: I0318 08:52:58.487438 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.487584 master-0 kubenswrapper[6976]: I0318 08:52:58.487462 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggjn\" (UniqueName: \"kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.487584 master-0 kubenswrapper[6976]: I0318 08:52:58.487492 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.487584 master-0 kubenswrapper[6976]: I0318 08:52:58.487522 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnn98\" (UniqueName: \"kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.487584 master-0 kubenswrapper[6976]: I0318 08:52:58.487554 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6p7s\" (UniqueName: \"kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.488690 master-0 kubenswrapper[6976]: I0318 08:52:58.488643 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.488690 master-0 kubenswrapper[6976]: I0318 08:52:58.488666 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.489182 master-0 kubenswrapper[6976]: I0318 08:52:58.489084 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csfl2\" (UniqueName: \"kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"
Mar 18 08:52:58.489182 master-0 kubenswrapper[6976]: I0318 08:52:58.489166 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.489592 master-0 kubenswrapper[6976]: I0318 08:52:58.489553 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.490490 master-0 kubenswrapper[6976]: I0318 08:52:58.490442 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.490558 master-0 kubenswrapper[6976]: I0318 08:52:58.490503 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.493465 master-0 kubenswrapper[6976]: I0318 08:52:58.491609 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.493465 master-0 kubenswrapper[6976]: I0318 08:52:58.493326 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.494098 master-0 kubenswrapper[6976]: I0318 08:52:58.494072 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.495297 master-0 kubenswrapper[6976]: I0318 08:52:58.495258 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.495507 master-0 kubenswrapper[6976]: I0318 08:52:58.495473 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.495771 master-0 kubenswrapper[6976]: I0318 08:52:58.495675 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"
Mar 18 08:52:58.496935 master-0 kubenswrapper[6976]: I0318 08:52:58.496900 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.501939 master-0 kubenswrapper[6976]: I0318 08:52:58.500821 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jcqf\" (UniqueName: \"kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 08:52:58.504669 master-0 kubenswrapper[6976]: I0318 08:52:58.504641 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q4k8\" (UniqueName: \"kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.505696 master-0 kubenswrapper[6976]: I0318 08:52:58.505666 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:52:58.505927 master-0 kubenswrapper[6976]: I0318 08:52:58.505903 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpwnv\" (UniqueName: \"kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv\") pod \"machine-approver-6cb57bb5db-5q678\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"
Mar 18 08:52:58.509123 master-0 kubenswrapper[6976]: I0318 08:52:58.509083 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.509123 master-0 kubenswrapper[6976]: I0318 08:52:58.509098 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnn98\" (UniqueName: \"kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 08:52:58.510852 master-0 kubenswrapper[6976]: I0318 08:52:58.510350 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cgw\" (UniqueName: \"kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 08:52:58.512363 master-0 kubenswrapper[6976]: I0318 08:52:58.512330 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6p7s\" (UniqueName: \"kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 08:52:58.516331 master-0 kubenswrapper[6976]: I0318 08:52:58.516157 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98qs\" (UniqueName: \"kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 08:52:58.519887 master-0 kubenswrapper[6976]: I0318 08:52:58.519860 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggjn\" (UniqueName: \"kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 08:52:58.521862 master-0 kubenswrapper[6976]: I0318 08:52:58.521827 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 08:52:58.524290 master-0 kubenswrapper[6976]: I0318 08:52:58.524248 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csfl2\" (UniqueName: \"kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 08:52:58.540593 master-0 kubenswrapper[6976]: I0318 08:52:58.540552 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 08:52:58.557981 master-0 kubenswrapper[6976]: I0318 08:52:58.557924 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" Mar 18 08:52:58.559263 master-0 kubenswrapper[6976]: I0318 08:52:58.559232 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 08:52:58.574023 master-0 kubenswrapper[6976]: I0318 08:52:58.573955 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" Mar 18 08:52:58.609281 master-0 kubenswrapper[6976]: I0318 08:52:58.607734 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 08:52:58.616711 master-0 kubenswrapper[6976]: I0318 08:52:58.615379 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 08:52:58.619451 master-0 kubenswrapper[6976]: I0318 08:52:58.619134 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" Mar 18 08:52:58.630640 master-0 kubenswrapper[6976]: I0318 08:52:58.630557 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:52:58.640878 master-0 kubenswrapper[6976]: I0318 08:52:58.639005 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 08:52:58.640878 master-0 kubenswrapper[6976]: I0318 08:52:58.640786 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 08:52:58.661311 master-0 kubenswrapper[6976]: I0318 08:52:58.659412 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 08:52:58.665061 master-0 kubenswrapper[6976]: I0318 08:52:58.664646 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 08:52:58.683490 master-0 kubenswrapper[6976]: I0318 08:52:58.683313 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 08:52:58.879711 master-0 kubenswrapper[6976]: I0318 08:52:58.879654 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x8lj"] Mar 18 08:52:58.994592 master-0 kubenswrapper[6976]: I0318 08:52:58.994541 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4r6jd"] Mar 18 08:52:59.013399 master-0 kubenswrapper[6976]: I0318 08:52:59.013360 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"] Mar 18 08:52:59.025754 master-0 kubenswrapper[6976]: I0318 08:52:59.025727 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz"] Mar 18 08:52:59.038826 master-0 kubenswrapper[6976]: W0318 08:52:59.031066 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a864188_ada6_4ec2_bf9f_72dab210f0ce.slice/crio-16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d WatchSource:0}: Error finding container 16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d: Status 404 returned error can't find the container with id 16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d Mar 18 08:52:59.134890 master-0 kubenswrapper[6976]: I0318 08:52:59.134682 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" event={"ID":"a0cd1cf7-be6f-4baf-8761-69c693476de9","Type":"ContainerStarted","Data":"16a1ea739ab8f65d8a4f8df45a743988b1ba71abf3b8764f36d6dbcba21ceced"} Mar 18 08:52:59.135930 master-0 kubenswrapper[6976]: I0318 08:52:59.135905 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r6jd" 
event={"ID":"995ec82c-b593-416a-9287-6020a484855c","Type":"ContainerStarted","Data":"89bd968ec5efc46c09a448832705d02b17ad02bc6a428167a08a2238bdb031ed"} Mar 18 08:52:59.139681 master-0 kubenswrapper[6976]: I0318 08:52:59.139185 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerStarted","Data":"a6d0d50087a2677e8b796853bf55d588c131864c88810e12454811eaee66e456"} Mar 18 08:52:59.140348 master-0 kubenswrapper[6976]: I0318 08:52:59.140282 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerStarted","Data":"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d"} Mar 18 08:52:59.140348 master-0 kubenswrapper[6976]: I0318 08:52:59.140339 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerStarted","Data":"dca96d10219b28aa237c36e44e279b95cbad3c38e675c6bef808b41a66303034"} Mar 18 08:52:59.141491 master-0 kubenswrapper[6976]: I0318 08:52:59.141306 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" event={"ID":"2a864188-ada6-4ec2-bf9f-72dab210f0ce","Type":"ContainerStarted","Data":"16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d"} Mar 18 08:52:59.143007 master-0 kubenswrapper[6976]: I0318 08:52:59.142637 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerStarted","Data":"e45b21057937437a963f15e3caed2257e18f92ac6c2b138e44af253b2ed1f746"} Mar 18 08:52:59.143007 master-0 
kubenswrapper[6976]: I0318 08:52:59.142660 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerStarted","Data":"99b24b432d9d961efa29c66242b9310a2073ba8bdb85f3ff964081d7dab2d588"} Mar 18 08:52:59.199933 master-0 kubenswrapper[6976]: I0318 08:52:59.198816 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"] Mar 18 08:52:59.351725 master-0 kubenswrapper[6976]: I0318 08:52:59.351669 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"] Mar 18 08:52:59.357782 master-0 kubenswrapper[6976]: I0318 08:52:59.357550 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"] Mar 18 08:52:59.363907 master-0 kubenswrapper[6976]: I0318 08:52:59.363859 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"] Mar 18 08:52:59.382751 master-0 kubenswrapper[6976]: I0318 08:52:59.382708 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"] Mar 18 08:52:59.386684 master-0 kubenswrapper[6976]: I0318 08:52:59.386634 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-89rtc"] Mar 18 08:52:59.579671 master-0 kubenswrapper[6976]: I0318 08:52:59.579352 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfdcz"] Mar 18 08:52:59.583883 master-0 kubenswrapper[6976]: I0318 08:52:59.583835 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2gpbt"] Mar 18 08:52:59.586118 master-0 kubenswrapper[6976]: I0318 08:52:59.586017 6976 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"] Mar 18 08:52:59.588347 master-0 kubenswrapper[6976]: I0318 08:52:59.588297 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"] Mar 18 08:52:59.597110 master-0 kubenswrapper[6976]: W0318 08:52:59.597068 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c322813_b574_4b46_b760_208ccecd01a5.slice/crio-936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9 WatchSource:0}: Error finding container 936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9: Status 404 returned error can't find the container with id 936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9 Mar 18 08:52:59.602180 master-0 kubenswrapper[6976]: W0318 08:52:59.602142 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a2c23f_26af_4c7f_8ea6_996bcfe173d0.slice/crio-2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973 WatchSource:0}: Error finding container 2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973: Status 404 returned error can't find the container with id 2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973 Mar 18 08:53:00.149490 master-0 kubenswrapper[6976]: I0318 08:53:00.149275 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" event={"ID":"f918d08d-df7c-4e8d-85ba-1c92d766db16","Type":"ContainerStarted","Data":"1e613a3e031cd6ea2569b0de90a9eb4c58efa7686815ccbe34135809d0dec254"} Mar 18 08:53:00.150972 master-0 kubenswrapper[6976]: I0318 08:53:00.150918 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" 
event={"ID":"50a2c23f-26af-4c7f-8ea6-996bcfe173d0","Type":"ContainerStarted","Data":"a56400fada4e092ccf735fa8531fa1116fbd4a21ce71d40bf93120376f0107c2"} Mar 18 08:53:00.150972 master-0 kubenswrapper[6976]: I0318 08:53:00.150969 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" event={"ID":"50a2c23f-26af-4c7f-8ea6-996bcfe173d0","Type":"ContainerStarted","Data":"2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973"} Mar 18 08:53:00.151973 master-0 kubenswrapper[6976]: I0318 08:53:00.151916 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"ca6a0275fcdb4cece62e11057aa43e164472b8187f168d1b56f7436a566a153a"} Mar 18 08:53:00.153010 master-0 kubenswrapper[6976]: I0318 08:53:00.152927 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" event={"ID":"3898c28b-69b0-46af-b085-37e12d7d80ba","Type":"ContainerStarted","Data":"62a17de80f64346bbd0c33255e42240333a632bbd8223bc931f3c908f3c47ad2"} Mar 18 08:53:00.154084 master-0 kubenswrapper[6976]: I0318 08:53:00.153971 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfdcz" event={"ID":"1c322813-b574-4b46-b760-208ccecd01a5","Type":"ContainerStarted","Data":"936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9"} Mar 18 08:53:00.155898 master-0 kubenswrapper[6976]: I0318 08:53:00.155827 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" event={"ID":"a0cd1cf7-be6f-4baf-8761-69c693476de9","Type":"ContainerStarted","Data":"5a456f7a3846dc34c77025024f941aff2ebe4c2315acadbbceadefc2561e1cfe"} Mar 18 08:53:00.157014 master-0 kubenswrapper[6976]: I0318 
08:53:00.156959 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" event={"ID":"e88b021c-c810-4a68-aa48-d8666b52330e","Type":"ContainerStarted","Data":"d6ccfac081e99c6c412564f51ffac7d61d3130a5f00a98585c4f3e1f5ce5443d"} Mar 18 08:53:00.158616 master-0 kubenswrapper[6976]: I0318 08:53:00.158583 6976 generic.go:334] "Generic (PLEG): container finished" podID="995ec82c-b593-416a-9287-6020a484855c" containerID="ee6511dee404aac71ff58b974ad8491dbd1ce0b8a6ad263b0d8e251dc9d1b943" exitCode=0 Mar 18 08:53:00.158733 master-0 kubenswrapper[6976]: I0318 08:53:00.158676 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r6jd" event={"ID":"995ec82c-b593-416a-9287-6020a484855c","Type":"ContainerDied","Data":"ee6511dee404aac71ff58b974ad8491dbd1ce0b8a6ad263b0d8e251dc9d1b943"} Mar 18 08:53:00.161145 master-0 kubenswrapper[6976]: I0318 08:53:00.160367 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" event={"ID":"fdb52116-9c55-4464-99c8-fc2e4559996b","Type":"ContainerStarted","Data":"6375035b1c9934af015884220573de3eaa67c7e1e900f78356d6b3c37aa38e9e"} Mar 18 08:53:00.161145 master-0 kubenswrapper[6976]: I0318 08:53:00.160408 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" event={"ID":"fdb52116-9c55-4464-99c8-fc2e4559996b","Type":"ContainerStarted","Data":"1307b515e04cb833c9f1e9d6e14d178f8505b7f9e092ede28bdd570b3c7ab5f2"} Mar 18 08:53:00.161836 master-0 kubenswrapper[6976]: I0318 08:53:00.161795 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" event={"ID":"25781967-12ce-490e-94aa-9b9722f495da","Type":"ContainerStarted","Data":"9fe02104a8ebb638006892092dba78285ba64eb0d3e1c75a7de249822d587f12"} Mar 18 08:53:00.163278 master-0 
kubenswrapper[6976]: I0318 08:53:00.163228 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" event={"ID":"bef948b9-eef4-404b-9b49-6e4a2ceea73b","Type":"ContainerStarted","Data":"96eaa2161390faf1f375ddc84ae530a0b116789db1b95b1822e7462d16878c6d"} Mar 18 08:53:00.163278 master-0 kubenswrapper[6976]: I0318 08:53:00.163257 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" event={"ID":"bef948b9-eef4-404b-9b49-6e4a2ceea73b","Type":"ContainerStarted","Data":"7d99052b3134ac6e3a86c06ba3a47b78c6cc784b483d36aa7d9f44db2d29bc24"} Mar 18 08:53:00.164410 master-0 kubenswrapper[6976]: I0318 08:53:00.164363 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gpbt" event={"ID":"bf5fd4cc-959e-4878-82e9-b0f90dba6553","Type":"ContainerStarted","Data":"b72ac994264149152fe27ab0a6c3a137789afbe22f9ace579dcf4e093554cfc8"} Mar 18 08:53:00.165744 master-0 kubenswrapper[6976]: I0318 08:53:00.165702 6976 generic.go:334] "Generic (PLEG): container finished" podID="f2fcd92f-0a58-4c87-8213-715453486aca" containerID="e45b21057937437a963f15e3caed2257e18f92ac6c2b138e44af253b2ed1f746" exitCode=0 Mar 18 08:53:00.165744 master-0 kubenswrapper[6976]: I0318 08:53:00.165730 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerDied","Data":"e45b21057937437a963f15e3caed2257e18f92ac6c2b138e44af253b2ed1f746"} Mar 18 08:53:02.037972 master-0 kubenswrapper[6976]: I0318 08:53:02.031920 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 08:53:02.115591 master-0 kubenswrapper[6976]: I0318 08:53:02.114863 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" podStartSLOduration=6.114841446 podStartE2EDuration="6.114841446s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:02.092594733 +0000 UTC m=+281.678196338" watchObservedRunningTime="2026-03-18 08:53:02.114841446 +0000 UTC m=+281.700443041" Mar 18 08:53:02.383676 master-0 kubenswrapper[6976]: I0318 08:53:02.383424 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 08:53:03.027466 master-0 kubenswrapper[6976]: I0318 08:53:03.027326 6976 generic.go:334] "Generic (PLEG): container finished" podID="1c322813-b574-4b46-b760-208ccecd01a5" containerID="7403c7f38da67ef9a4e6e3661a1a27ddfd26ac674591d0d6ae38450cf6903ac0" exitCode=0 Mar 18 08:53:03.027466 master-0 kubenswrapper[6976]: I0318 08:53:03.027384 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfdcz" event={"ID":"1c322813-b574-4b46-b760-208ccecd01a5","Type":"ContainerDied","Data":"7403c7f38da67ef9a4e6e3661a1a27ddfd26ac674591d0d6ae38450cf6903ac0"} Mar 18 08:53:03.028425 master-0 kubenswrapper[6976]: I0318 08:53:03.028378 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" event={"ID":"e88b021c-c810-4a68-aa48-d8666b52330e","Type":"ContainerStarted","Data":"bfcbcaea6005e034c0f46ea403d42a9f8698a9d7f77d0b818f8734b9697f99f9"} Mar 18 08:53:03.031956 master-0 kubenswrapper[6976]: I0318 08:53:03.030972 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" event={"ID":"bef948b9-eef4-404b-9b49-6e4a2ceea73b","Type":"ContainerStarted","Data":"970d9453dfead375a6d3688101dd44c0fe364ba03960f502aa99f0f04d563169"} 
Mar 18 08:53:03.032984 master-0 kubenswrapper[6976]: I0318 08:53:03.032875 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf5fd4cc-959e-4878-82e9-b0f90dba6553" containerID="a4600607ede35bdc684e56df1e32d786c4e72f5ab0392ea420b4029975f14ee2" exitCode=0 Mar 18 08:53:03.032984 master-0 kubenswrapper[6976]: I0318 08:53:03.032917 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gpbt" event={"ID":"bf5fd4cc-959e-4878-82e9-b0f90dba6553","Type":"ContainerDied","Data":"a4600607ede35bdc684e56df1e32d786c4e72f5ab0392ea420b4029975f14ee2"} Mar 18 08:53:03.135476 master-0 kubenswrapper[6976]: I0318 08:53:03.135372 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" podStartSLOduration=8.135346583 podStartE2EDuration="8.135346583s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:03.1309425 +0000 UTC m=+282.716544145" watchObservedRunningTime="2026-03-18 08:53:03.135346583 +0000 UTC m=+282.720948178" Mar 18 08:53:05.369516 master-0 kubenswrapper[6976]: I0318 08:53:05.369462 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rhm2f"] Mar 18 08:53:05.370626 master-0 kubenswrapper[6976]: I0318 08:53:05.370600 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.372410 master-0 kubenswrapper[6976]: I0318 08:53:05.372091 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xhpr4" Mar 18 08:53:05.372410 master-0 kubenswrapper[6976]: I0318 08:53:05.372250 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 18 08:53:05.503466 master-0 kubenswrapper[6976]: I0318 08:53:05.503426 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.503669 master-0 kubenswrapper[6976]: I0318 08:53:05.503511 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.503669 master-0 kubenswrapper[6976]: I0318 08:53:05.503542 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhdc2\" (UniqueName: \"kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.503669 master-0 kubenswrapper[6976]: I0318 08:53:05.503596 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.605335 master-0 kubenswrapper[6976]: I0318 08:53:05.605288 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.605583 master-0 kubenswrapper[6976]: I0318 08:53:05.605534 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.605650 master-0 kubenswrapper[6976]: I0318 08:53:05.605628 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhdc2\" (UniqueName: \"kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.605650 master-0 kubenswrapper[6976]: I0318 08:53:05.605639 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.605768 master-0 kubenswrapper[6976]: I0318 
08:53:05.605748 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.606662 master-0 kubenswrapper[6976]: I0318 08:53:05.606589 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.617902 master-0 kubenswrapper[6976]: I0318 08:53:05.617770 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.620430 master-0 kubenswrapper[6976]: I0318 08:53:05.620341 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhdc2\" (UniqueName: \"kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.702002 master-0 kubenswrapper[6976]: I0318 08:53:05.701958 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 08:53:05.758329 master-0 kubenswrapper[6976]: I0318 08:53:05.758215 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 08:53:09.071746 master-0 kubenswrapper[6976]: I0318 08:53:09.071538 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"] Mar 18 08:53:09.200091 master-0 kubenswrapper[6976]: W0318 08:53:09.200036 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7cf2cff_ca67_4cc6_99e7_99478ab89af4.slice/crio-c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c WatchSource:0}: Error finding container c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c: Status 404 returned error can't find the container with id c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c Mar 18 08:53:10.112634 master-0 kubenswrapper[6976]: I0318 08:53:10.112592 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" event={"ID":"e88b021c-c810-4a68-aa48-d8666b52330e","Type":"ContainerStarted","Data":"191e1385839aadfcf8fad00f70dd0c37383e76893667c6d202209b39b27d4f57"} Mar 18 08:53:10.118882 master-0 kubenswrapper[6976]: I0318 08:53:10.118793 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" event={"ID":"a7cf2cff-ca67-4cc6-99e7-99478ab89af4","Type":"ContainerStarted","Data":"30f93c0d7aceb0a61a10f9de69e3d2b23b7f930983a160d5cfa854bf088d353c"} Mar 18 08:53:10.118882 master-0 kubenswrapper[6976]: I0318 08:53:10.118819 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" 
event={"ID":"a7cf2cff-ca67-4cc6-99e7-99478ab89af4","Type":"ContainerStarted","Data":"f2bc8f8a30f5f898b8e32d4b2badc7d211fdb25d86d18fe7005606b1bfdd7a70"} Mar 18 08:53:10.118882 master-0 kubenswrapper[6976]: I0318 08:53:10.118828 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" event={"ID":"a7cf2cff-ca67-4cc6-99e7-99478ab89af4","Type":"ContainerStarted","Data":"c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c"} Mar 18 08:53:10.131935 master-0 kubenswrapper[6976]: I0318 08:53:10.131888 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerStarted","Data":"7777a1d79f14128c1ec3e26bec66f6050a3c54aab4fef032e4afa313fab7fc66"} Mar 18 08:53:10.131935 master-0 kubenswrapper[6976]: I0318 08:53:10.131931 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerStarted","Data":"bd7092603130bec3a07549bc35a1bd4eb99757be126618dadbb88ce76a361a16"} Mar 18 08:53:10.145112 master-0 kubenswrapper[6976]: I0318 08:53:10.144407 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" podStartSLOduration=7.954662661 podStartE2EDuration="15.14439124s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:53:02.019368128 +0000 UTC m=+281.604969723" lastFinishedPulling="2026-03-18 08:53:09.209096707 +0000 UTC m=+288.794698302" observedRunningTime="2026-03-18 08:53:10.133918902 +0000 UTC m=+289.719520497" watchObservedRunningTime="2026-03-18 08:53:10.14439124 +0000 UTC m=+289.729992835" Mar 18 08:53:10.156302 master-0 kubenswrapper[6976]: 
I0318 08:53:10.156243 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" event={"ID":"f918d08d-df7c-4e8d-85ba-1c92d766db16","Type":"ContainerStarted","Data":"bd36fbdc2ea8302bb08039b79171e89aad8c510aa2224a4f4efa1d96761635da"} Mar 18 08:53:10.175598 master-0 kubenswrapper[6976]: I0318 08:53:10.174133 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"7b36832336c0175d2c062b626fd1fa7a6dc659fcd0e0485e0a70c40ae5dfc680"} Mar 18 08:53:10.175598 master-0 kubenswrapper[6976]: I0318 08:53:10.174190 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"0caedbadbfcaeb7785b9d06130fc6e0d2a7ecb9753168035bbf898c397b762cf"} Mar 18 08:53:10.180119 master-0 kubenswrapper[6976]: I0318 08:53:10.180039 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" podStartSLOduration=5.180016984 podStartE2EDuration="5.180016984s" podCreationTimestamp="2026-03-18 08:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:10.168233622 +0000 UTC m=+289.753835217" watchObservedRunningTime="2026-03-18 08:53:10.180016984 +0000 UTC m=+289.765618579" Mar 18 08:53:10.265588 master-0 kubenswrapper[6976]: I0318 08:53:10.263217 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" podStartSLOduration=5.557112601 podStartE2EDuration="15.263201168s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.510278287 +0000 UTC 
m=+279.095879882" lastFinishedPulling="2026-03-18 08:53:09.216366854 +0000 UTC m=+288.801968449" observedRunningTime="2026-03-18 08:53:10.262360397 +0000 UTC m=+289.847962002" watchObservedRunningTime="2026-03-18 08:53:10.263201168 +0000 UTC m=+289.848802763" Mar 18 08:53:10.265925 master-0 kubenswrapper[6976]: I0318 08:53:10.263341 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerStarted","Data":"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17"} Mar 18 08:53:10.265925 master-0 kubenswrapper[6976]: I0318 08:53:10.263504 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="kube-rbac-proxy" containerID="cri-o://522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" gracePeriod=30 Mar 18 08:53:10.266019 master-0 kubenswrapper[6976]: I0318 08:53:10.263710 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="machine-approver-controller" containerID="cri-o://0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" gracePeriod=30 Mar 18 08:53:10.290841 master-0 kubenswrapper[6976]: I0318 08:53:10.288464 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" event={"ID":"2a864188-ada6-4ec2-bf9f-72dab210f0ce","Type":"ContainerStarted","Data":"0dee431f1bab8eafebe24c7c7116af4c82f57849d3fa9f78e391b177e72f8116"} Mar 18 08:53:10.292959 master-0 kubenswrapper[6976]: I0318 08:53:10.292694 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" event={"ID":"25781967-12ce-490e-94aa-9b9722f495da","Type":"ContainerStarted","Data":"49a79a26d80521d4a77ceb38753751818ca40b01df46c62b4c6e6cd03feb2aa4"} Mar 18 08:53:10.306321 master-0 kubenswrapper[6976]: I0318 08:53:10.305401 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" podStartSLOduration=5.6285418830000005 podStartE2EDuration="15.30538027s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.510318448 +0000 UTC m=+279.095920053" lastFinishedPulling="2026-03-18 08:53:09.187156845 +0000 UTC m=+288.772758440" observedRunningTime="2026-03-18 08:53:10.302356163 +0000 UTC m=+289.887957758" watchObservedRunningTime="2026-03-18 08:53:10.30538027 +0000 UTC m=+289.890981865" Mar 18 08:53:10.316267 master-0 kubenswrapper[6976]: I0318 08:53:10.307800 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" event={"ID":"3898c28b-69b0-46af-b085-37e12d7d80ba","Type":"ContainerStarted","Data":"8f78f0a3aaaea7a2d324d3c6cd6682b46e8ff5c7c7c26c54704c2010bc56d790"} Mar 18 08:53:10.316267 master-0 kubenswrapper[6976]: I0318 08:53:10.307843 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" event={"ID":"3898c28b-69b0-46af-b085-37e12d7d80ba","Type":"ContainerStarted","Data":"d590226a02a8354c7f098bb9ad91a5be87644c667893f71233426cdb8a8b55b9"} Mar 18 08:53:10.386011 master-0 kubenswrapper[6976]: I0318 08:53:10.382682 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" podStartSLOduration=5.213742754 podStartE2EDuration="15.382663493s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" 
firstStartedPulling="2026-03-18 08:52:59.048273676 +0000 UTC m=+278.633875271" lastFinishedPulling="2026-03-18 08:53:09.217194415 +0000 UTC m=+288.802796010" observedRunningTime="2026-03-18 08:53:10.344082703 +0000 UTC m=+289.929684298" watchObservedRunningTime="2026-03-18 08:53:10.382663493 +0000 UTC m=+289.968265078" Mar 18 08:53:10.386011 master-0 kubenswrapper[6976]: I0318 08:53:10.382950 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" podStartSLOduration=5.674855061 podStartE2EDuration="15.38294557s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.510286747 +0000 UTC m=+279.095888342" lastFinishedPulling="2026-03-18 08:53:09.218377256 +0000 UTC m=+288.803978851" observedRunningTime="2026-03-18 08:53:10.38176313 +0000 UTC m=+289.967364735" watchObservedRunningTime="2026-03-18 08:53:10.38294557 +0000 UTC m=+289.968547155" Mar 18 08:53:10.422670 master-0 kubenswrapper[6976]: I0318 08:53:10.422498 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" podStartSLOduration=5.347738361 podStartE2EDuration="15.422069004s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.105662628 +0000 UTC m=+278.691264223" lastFinishedPulling="2026-03-18 08:53:09.179993281 +0000 UTC m=+288.765594866" observedRunningTime="2026-03-18 08:53:10.403249521 +0000 UTC m=+289.988851136" watchObservedRunningTime="2026-03-18 08:53:10.422069004 +0000 UTC m=+290.007670599" Mar 18 08:53:10.434118 master-0 kubenswrapper[6976]: I0318 08:53:10.434001 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" podStartSLOduration=5.973318599 podStartE2EDuration="15.433941659s" podCreationTimestamp="2026-03-18 08:52:55 +0000 
UTC" firstStartedPulling="2026-03-18 08:52:59.760053515 +0000 UTC m=+279.345655110" lastFinishedPulling="2026-03-18 08:53:09.220676575 +0000 UTC m=+288.806278170" observedRunningTime="2026-03-18 08:53:10.417198329 +0000 UTC m=+290.002799924" watchObservedRunningTime="2026-03-18 08:53:10.433941659 +0000 UTC m=+290.019543254" Mar 18 08:53:10.538221 master-0 kubenswrapper[6976]: I0318 08:53:10.538166 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" Mar 18 08:53:10.637551 master-0 kubenswrapper[6976]: I0318 08:53:10.636867 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls\") pod \"27f3789b-85bc-4a6b-9e1e-43901d680842\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " Mar 18 08:53:10.637551 master-0 kubenswrapper[6976]: I0318 08:53:10.636965 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config\") pod \"27f3789b-85bc-4a6b-9e1e-43901d680842\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " Mar 18 08:53:10.637551 master-0 kubenswrapper[6976]: I0318 08:53:10.637008 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config\") pod \"27f3789b-85bc-4a6b-9e1e-43901d680842\" (UID: \"27f3789b-85bc-4a6b-9e1e-43901d680842\") " Mar 18 08:53:10.637551 master-0 kubenswrapper[6976]: I0318 08:53:10.637048 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpwnv\" (UniqueName: \"kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv\") pod \"27f3789b-85bc-4a6b-9e1e-43901d680842\" (UID: 
\"27f3789b-85bc-4a6b-9e1e-43901d680842\") " Mar 18 08:53:10.637784 master-0 kubenswrapper[6976]: I0318 08:53:10.637752 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config" (OuterVolumeSpecName: "config") pod "27f3789b-85bc-4a6b-9e1e-43901d680842" (UID: "27f3789b-85bc-4a6b-9e1e-43901d680842"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:53:10.638154 master-0 kubenswrapper[6976]: I0318 08:53:10.638116 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "27f3789b-85bc-4a6b-9e1e-43901d680842" (UID: "27f3789b-85bc-4a6b-9e1e-43901d680842"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:53:10.641254 master-0 kubenswrapper[6976]: I0318 08:53:10.641203 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv" (OuterVolumeSpecName: "kube-api-access-gpwnv") pod "27f3789b-85bc-4a6b-9e1e-43901d680842" (UID: "27f3789b-85bc-4a6b-9e1e-43901d680842"). InnerVolumeSpecName "kube-api-access-gpwnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:53:10.645718 master-0 kubenswrapper[6976]: I0318 08:53:10.645648 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "27f3789b-85bc-4a6b-9e1e-43901d680842" (UID: "27f3789b-85bc-4a6b-9e1e-43901d680842"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:53:10.738903 master-0 kubenswrapper[6976]: I0318 08:53:10.738584 6976 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:10.738903 master-0 kubenswrapper[6976]: I0318 08:53:10.738707 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpwnv\" (UniqueName: \"kubernetes.io/projected/27f3789b-85bc-4a6b-9e1e-43901d680842-kube-api-access-gpwnv\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:10.738903 master-0 kubenswrapper[6976]: I0318 08:53:10.738718 6976 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/27f3789b-85bc-4a6b-9e1e-43901d680842-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:10.738903 master-0 kubenswrapper[6976]: I0318 08:53:10.738728 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27f3789b-85bc-4a6b-9e1e-43901d680842-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:11.319869 master-0 kubenswrapper[6976]: I0318 08:53:11.319666 6976 generic.go:334] "Generic (PLEG): container finished" podID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerID="0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" exitCode=0 Mar 18 08:53:11.319869 master-0 kubenswrapper[6976]: I0318 08:53:11.319741 6976 generic.go:334] "Generic (PLEG): container finished" podID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerID="522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" exitCode=0 Mar 18 08:53:11.319869 master-0 kubenswrapper[6976]: I0318 08:53:11.319744 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" 
event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerDied","Data":"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17"} Mar 18 08:53:11.319869 master-0 kubenswrapper[6976]: I0318 08:53:11.319785 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerDied","Data":"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d"} Mar 18 08:53:11.319869 master-0 kubenswrapper[6976]: I0318 08:53:11.319790 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" Mar 18 08:53:11.324103 master-0 kubenswrapper[6976]: I0318 08:53:11.319961 6976 scope.go:117] "RemoveContainer" containerID="0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" Mar 18 08:53:11.324441 master-0 kubenswrapper[6976]: I0318 08:53:11.319797 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678" event={"ID":"27f3789b-85bc-4a6b-9e1e-43901d680842","Type":"ContainerDied","Data":"dca96d10219b28aa237c36e44e279b95cbad3c38e675c6bef808b41a66303034"} Mar 18 08:53:11.331951 master-0 kubenswrapper[6976]: I0318 08:53:11.331761 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerStarted","Data":"6e72ce9a41e5ca4914523ec4625d65a11abcfde5d826b953a0bb3e17605639e5"} Mar 18 08:53:11.359500 master-0 kubenswrapper[6976]: I0318 08:53:11.359427 6976 scope.go:117] "RemoveContainer" containerID="522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" Mar 18 08:53:11.366405 master-0 kubenswrapper[6976]: I0318 08:53:11.366316 6976 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" podStartSLOduration=4.896556981 podStartE2EDuration="15.366290367s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:52:58.737917474 +0000 UTC m=+278.323519059" lastFinishedPulling="2026-03-18 08:53:09.20765085 +0000 UTC m=+288.793252445" observedRunningTime="2026-03-18 08:53:11.364528052 +0000 UTC m=+290.950129687" watchObservedRunningTime="2026-03-18 08:53:11.366290367 +0000 UTC m=+290.951891972" Mar 18 08:53:11.388107 master-0 kubenswrapper[6976]: I0318 08:53:11.388043 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"] Mar 18 08:53:11.391767 master-0 kubenswrapper[6976]: I0318 08:53:11.391697 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-5q678"] Mar 18 08:53:11.398753 master-0 kubenswrapper[6976]: I0318 08:53:11.398667 6976 scope.go:117] "RemoveContainer" containerID="0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" Mar 18 08:53:11.399676 master-0 kubenswrapper[6976]: E0318 08:53:11.399369 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17\": container with ID starting with 0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17 not found: ID does not exist" containerID="0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" Mar 18 08:53:11.399676 master-0 kubenswrapper[6976]: I0318 08:53:11.399441 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17"} err="failed to get container status \"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17\": rpc error: 
code = NotFound desc = could not find container \"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17\": container with ID starting with 0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17 not found: ID does not exist" Mar 18 08:53:11.399676 master-0 kubenswrapper[6976]: I0318 08:53:11.399492 6976 scope.go:117] "RemoveContainer" containerID="522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" Mar 18 08:53:11.403697 master-0 kubenswrapper[6976]: E0318 08:53:11.401087 6976 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d\": container with ID starting with 522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d not found: ID does not exist" containerID="522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" Mar 18 08:53:11.403697 master-0 kubenswrapper[6976]: I0318 08:53:11.401127 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d"} err="failed to get container status \"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d\": rpc error: code = NotFound desc = could not find container \"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d\": container with ID starting with 522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d not found: ID does not exist" Mar 18 08:53:11.403697 master-0 kubenswrapper[6976]: I0318 08:53:11.401151 6976 scope.go:117] "RemoveContainer" containerID="0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17" Mar 18 08:53:11.403697 master-0 kubenswrapper[6976]: I0318 08:53:11.402744 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17"} err="failed to get container status 
\"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17\": rpc error: code = NotFound desc = could not find container \"0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17\": container with ID starting with 0665cbc35ff059b7c012184a8fbe82d550aedb6c0c518adbc451b7a648f9ca17 not found: ID does not exist" Mar 18 08:53:11.403697 master-0 kubenswrapper[6976]: I0318 08:53:11.402789 6976 scope.go:117] "RemoveContainer" containerID="522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d" Mar 18 08:53:11.406795 master-0 kubenswrapper[6976]: I0318 08:53:11.406724 6976 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d"} err="failed to get container status \"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d\": rpc error: code = NotFound desc = could not find container \"522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d\": container with ID starting with 522f7bc5c6b278c3c5f6254df4edae91677c8c5290845987c8fe09fce0639a5d not found: ID does not exist" Mar 18 08:53:11.422540 master-0 kubenswrapper[6976]: I0318 08:53:11.422342 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"] Mar 18 08:53:11.422792 master-0 kubenswrapper[6976]: E0318 08:53:11.422680 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="machine-approver-controller" Mar 18 08:53:11.422792 master-0 kubenswrapper[6976]: I0318 08:53:11.422771 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="machine-approver-controller" Mar 18 08:53:11.422903 master-0 kubenswrapper[6976]: E0318 08:53:11.422796 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="kube-rbac-proxy" Mar 18 
08:53:11.422903 master-0 kubenswrapper[6976]: I0318 08:53:11.422811 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="kube-rbac-proxy" Mar 18 08:53:11.423055 master-0 kubenswrapper[6976]: I0318 08:53:11.422999 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="kube-rbac-proxy" Mar 18 08:53:11.423055 master-0 kubenswrapper[6976]: I0318 08:53:11.423033 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" containerName="machine-approver-controller" Mar 18 08:53:11.424047 master-0 kubenswrapper[6976]: I0318 08:53:11.424007 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.427350 master-0 kubenswrapper[6976]: I0318 08:53:11.427297 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 18 08:53:11.427723 master-0 kubenswrapper[6976]: I0318 08:53:11.427689 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vvwvf" Mar 18 08:53:11.427938 master-0 kubenswrapper[6976]: I0318 08:53:11.427909 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 08:53:11.429191 master-0 kubenswrapper[6976]: I0318 08:53:11.429160 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 08:53:11.430071 master-0 kubenswrapper[6976]: I0318 08:53:11.430004 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 18 08:53:11.431918 master-0 kubenswrapper[6976]: I0318 08:53:11.431881 6976 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 08:53:11.509132 master-0 kubenswrapper[6976]: I0318 08:53:11.508950 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.509132 master-0 kubenswrapper[6976]: I0318 08:53:11.509074 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrbj\" (UniqueName: \"kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.509453 master-0 kubenswrapper[6976]: I0318 08:53:11.509143 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.509453 master-0 kubenswrapper[6976]: I0318 08:53:11.509216 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.611298 master-0 kubenswrapper[6976]: I0318 
08:53:11.610882 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.611298 master-0 kubenswrapper[6976]: I0318 08:53:11.611014 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrbj\" (UniqueName: \"kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.611298 master-0 kubenswrapper[6976]: I0318 08:53:11.611062 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.611298 master-0 kubenswrapper[6976]: I0318 08:53:11.611121 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 08:53:11.614796 master-0 kubenswrapper[6976]: I0318 08:53:11.613720 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod 
\"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 08:53:11.614796 master-0 kubenswrapper[6976]: I0318 08:53:11.613956 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 08:53:11.618212 master-0 kubenswrapper[6976]: I0318 08:53:11.616952 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 08:53:11.642249 master-0 kubenswrapper[6976]: I0318 08:53:11.642203 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrbj\" (UniqueName: \"kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 08:53:11.746447 master-0 kubenswrapper[6976]: I0318 08:53:11.746399 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 08:53:11.791337 master-0 kubenswrapper[6976]: W0318 08:53:11.791277 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcd27a4_6d46_47af_a14a_65f6501c10f0.slice/crio-aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656 WatchSource:0}: Error finding container aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656: Status 404 returned error can't find the container with id aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656
Mar 18 08:53:12.337275 master-0 kubenswrapper[6976]: I0318 08:53:12.337213 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" event={"ID":"cdcd27a4-6d46-47af-a14a-65f6501c10f0","Type":"ContainerStarted","Data":"ca74e483ee5f7795ddd4a19b8dedb0099339c33aeba4c489fb33f3fdb2d038a6"}
Mar 18 08:53:12.337275 master-0 kubenswrapper[6976]: I0318 08:53:12.337266 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" event={"ID":"cdcd27a4-6d46-47af-a14a-65f6501c10f0","Type":"ContainerStarted","Data":"81903817218ef799d0c11ebd26d624efb8273f70d2732972eee7b85e873d1ac4"}
Mar 18 08:53:12.337275 master-0 kubenswrapper[6976]: I0318 08:53:12.337280 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" event={"ID":"cdcd27a4-6d46-47af-a14a-65f6501c10f0","Type":"ContainerStarted","Data":"aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656"}
Mar 18 08:53:12.614600 master-0 kubenswrapper[6976]: I0318 08:53:12.614480 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27f3789b-85bc-4a6b-9e1e-43901d680842" path="/var/lib/kubelet/pods/27f3789b-85bc-4a6b-9e1e-43901d680842/volumes"
Mar 18 08:53:12.615363 master-0 kubenswrapper[6976]: I0318 08:53:12.614958 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"]
Mar 18 08:53:12.615736 master-0 kubenswrapper[6976]: I0318 08:53:12.615696 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.619717 master-0 kubenswrapper[6976]: I0318 08:53:12.619477 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 08:53:12.619717 master-0 kubenswrapper[6976]: I0318 08:53:12.619630 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-l4xp6"
Mar 18 08:53:12.620888 master-0 kubenswrapper[6976]: I0318 08:53:12.620852 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"]
Mar 18 08:53:12.840599 master-0 kubenswrapper[6976]: I0318 08:53:12.839126 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.840599 master-0 kubenswrapper[6976]: I0318 08:53:12.839258 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.840599 master-0 kubenswrapper[6976]: I0318 08:53:12.839325 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddsnb\" (UniqueName: \"kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.952781 master-0 kubenswrapper[6976]: I0318 08:53:12.950597 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.952781 master-0 kubenswrapper[6976]: I0318 08:53:12.950676 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsnb\" (UniqueName: \"kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.952781 master-0 kubenswrapper[6976]: I0318 08:53:12.950800 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.952781 master-0 kubenswrapper[6976]: I0318 08:53:12.951806 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.974339 master-0 kubenswrapper[6976]: I0318 08:53:12.967767 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:12.998895 master-0 kubenswrapper[6976]: I0318 08:53:12.998836 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsnb\" (UniqueName: \"kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:13.292926 master-0 kubenswrapper[6976]: I0318 08:53:13.292851 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 08:53:13.366548 master-0 kubenswrapper[6976]: I0318 08:53:13.366461 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" podStartSLOduration=2.366437698 podStartE2EDuration="2.366437698s" podCreationTimestamp="2026-03-18 08:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:13.363015881 +0000 UTC m=+292.948617476" watchObservedRunningTime="2026-03-18 08:53:13.366437698 +0000 UTC m=+292.952039303"
Mar 18 08:53:13.708302 master-0 kubenswrapper[6976]: I0318 08:53:13.708254 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"]
Mar 18 08:53:13.993985 master-0 kubenswrapper[6976]: I0318 08:53:13.993856 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"]
Mar 18 08:53:13.994480 master-0 kubenswrapper[6976]: I0318 08:53:13.994448 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.000621 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-sgsmn"]
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.001465 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005240 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005417 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005548 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005714 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005806 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.005885 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.006254 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"]
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.006788 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.009820 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-s4fhp"
Mar 18 08:53:14.141449 master-0 kubenswrapper[6976]: I0318 08:53:14.018058 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 08:53:14.149485 master-0 kubenswrapper[6976]: I0318 08:53:14.149434 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"]
Mar 18 08:53:14.166514 master-0 kubenswrapper[6976]: I0318 08:53:14.166469 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.166514 master-0 kubenswrapper[6976]: I0318 08:53:14.166511 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd8zs\" (UniqueName: \"kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs\") pod \"network-check-source-b4bf74f6-7zvkl\" (UID: \"17b1447b-1659-405b-81e0-21f0cf3e7a2c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"
Mar 18 08:53:14.166605 master-0 kubenswrapper[6976]: I0318 08:53:14.166538 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.166647 master-0 kubenswrapper[6976]: I0318 08:53:14.166606 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.166647 master-0 kubenswrapper[6976]: I0318 08:53:14.166632 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:14.166699 master-0 kubenswrapper[6976]: I0318 08:53:14.166662 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt64s\" (UniqueName: \"kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.166699 master-0 kubenswrapper[6976]: I0318 08:53:14.166688 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.267752 master-0 kubenswrapper[6976]: I0318 08:53:14.267602 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.267752 master-0 kubenswrapper[6976]: I0318 08:53:14.267677 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.267752 master-0 kubenswrapper[6976]: I0318 08:53:14.267711 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8zs\" (UniqueName: \"kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs\") pod \"network-check-source-b4bf74f6-7zvkl\" (UID: \"17b1447b-1659-405b-81e0-21f0cf3e7a2c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"
Mar 18 08:53:14.267752 master-0 kubenswrapper[6976]: I0318 08:53:14.267744 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.268076 master-0 kubenswrapper[6976]: I0318 08:53:14.267791 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.268076 master-0 kubenswrapper[6976]: I0318 08:53:14.267809 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:14.268076 master-0 kubenswrapper[6976]: I0318 08:53:14.267839 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt64s\" (UniqueName: \"kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.271924 master-0 kubenswrapper[6976]: I0318 08:53:14.271885 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.272503 master-0 kubenswrapper[6976]: I0318 08:53:14.272470 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.275889 master-0 kubenswrapper[6976]: I0318 08:53:14.275846 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.278423 master-0 kubenswrapper[6976]: I0318 08:53:14.278377 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:14.281188 master-0 kubenswrapper[6976]: I0318 08:53:14.281144 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:14.685776 master-0 kubenswrapper[6976]: I0318 08:53:14.685739 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:14.688107 master-0 kubenswrapper[6976]: I0318 08:53:14.687820 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" event={"ID":"d7205eeb-912b-4c31-b08f-ed0b2a1319aa","Type":"ContainerStarted","Data":"387c4f86f3eda72de99fb349bfcfdfee8bbde3da963ee173fb1f57ebdf887390"}
Mar 18 08:53:14.688107 master-0 kubenswrapper[6976]: I0318 08:53:14.687881 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" event={"ID":"d7205eeb-912b-4c31-b08f-ed0b2a1319aa","Type":"ContainerStarted","Data":"50fd77676f2fb32890abad0222ed7ebdb08546cdf39f1ddb90ccc00d539b7f06"}
Mar 18 08:53:14.688107 master-0 kubenswrapper[6976]: I0318 08:53:14.687891 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" event={"ID":"d7205eeb-912b-4c31-b08f-ed0b2a1319aa","Type":"ContainerStarted","Data":"74b42a82fad4fc08801bc253d1dad3a48f5984717f93c0a00de7af542db7236a"}
Mar 18 08:53:15.142362 master-0 kubenswrapper[6976]: I0318 08:53:15.141196 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"]
Mar 18 08:53:15.212454 master-0 kubenswrapper[6976]: I0318 08:53:15.202039 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8zs\" (UniqueName: \"kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs\") pod \"network-check-source-b4bf74f6-7zvkl\" (UID: \"17b1447b-1659-405b-81e0-21f0cf3e7a2c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"
Mar 18 08:53:15.212454 master-0 kubenswrapper[6976]: I0318 08:53:15.212317 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt64s\" (UniqueName: \"kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:15.219814 master-0 kubenswrapper[6976]: I0318 08:53:15.218504 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" podStartSLOduration=3.21848255 podStartE2EDuration="3.21848255s" podCreationTimestamp="2026-03-18 08:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:15.215699259 +0000 UTC m=+294.801300864" watchObservedRunningTime="2026-03-18 08:53:15.21848255 +0000 UTC m=+294.804084155"
Mar 18 08:53:15.227764 master-0 kubenswrapper[6976]: I0318 08:53:15.218681 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"
Mar 18 08:53:15.276925 master-0 kubenswrapper[6976]: I0318 08:53:15.276877 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:15.397600 master-0 kubenswrapper[6976]: I0318 08:53:15.397374 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"]
Mar 18 08:53:15.397848 master-0 kubenswrapper[6976]: I0318 08:53:15.397766 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="cluster-cloud-controller-manager" containerID="cri-o://bd7092603130bec3a07549bc35a1bd4eb99757be126618dadbb88ce76a361a16" gracePeriod=30
Mar 18 08:53:15.397848 master-0 kubenswrapper[6976]: I0318 08:53:15.397813 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="kube-rbac-proxy" containerID="cri-o://6e72ce9a41e5ca4914523ec4625d65a11abcfde5d826b953a0bb3e17605639e5" gracePeriod=30
Mar 18 08:53:15.397996 master-0 kubenswrapper[6976]: I0318 08:53:15.397926 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="config-sync-controllers" containerID="cri-o://7777a1d79f14128c1ec3e26bec66f6050a3c54aab4fef032e4afa313fab7fc66" gracePeriod=30
Mar 18 08:53:16.045497 master-0 kubenswrapper[6976]: I0318 08:53:16.045416 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"]
Mar 18 08:53:16.085043 master-0 kubenswrapper[6976]: I0318 08:53:16.085016 6976 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 08:53:17.859610 master-0 kubenswrapper[6976]: I0318 08:53:17.859556 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/1.log"
Mar 18 08:53:18.057756 master-0 kubenswrapper[6976]: I0318 08:53:18.057699 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/2.log"
Mar 18 08:53:18.455640 master-0 kubenswrapper[6976]: I0318 08:53:18.455492 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6ff67f5cc6-vg6s9_15b6612f-3a51-4a67-a566-8c520f85c6c2/fix-audit-permissions/0.log"
Mar 18 08:53:18.512325 master-0 kubenswrapper[6976]: I0318 08:53:18.512261 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-rw7hw"]
Mar 18 08:53:18.512935 master-0 kubenswrapper[6976]: I0318 08:53:18.512909 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.514812 master-0 kubenswrapper[6976]: I0318 08:53:18.514778 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 08:53:18.514866 master-0 kubenswrapper[6976]: I0318 08:53:18.514821 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nvh22"
Mar 18 08:53:18.515075 master-0 kubenswrapper[6976]: I0318 08:53:18.515052 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 08:53:18.539512 master-0 kubenswrapper[6976]: I0318 08:53:18.539457 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.539734 master-0 kubenswrapper[6976]: I0318 08:53:18.539540 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.539734 master-0 kubenswrapper[6976]: I0318 08:53:18.539637 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76wp\" (UniqueName: \"kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.640957 master-0 kubenswrapper[6976]: I0318 08:53:18.640905 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n76wp\" (UniqueName: \"kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.641130 master-0 kubenswrapper[6976]: I0318 08:53:18.640997 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.641130 master-0 kubenswrapper[6976]: I0318 08:53:18.641067 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.644638 master-0 kubenswrapper[6976]: I0318 08:53:18.644581 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.645790 master-0 kubenswrapper[6976]: I0318 08:53:18.645765 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.659877 master-0 kubenswrapper[6976]: I0318 08:53:18.659821 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6ff67f5cc6-vg6s9_15b6612f-3a51-4a67-a566-8c520f85c6c2/oauth-apiserver/0.log"
Mar 18 08:53:18.660662 master-0 kubenswrapper[6976]: I0318 08:53:18.660627 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n76wp\" (UniqueName: \"kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.831302 master-0 kubenswrapper[6976]: I0318 08:53:18.831187 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 08:53:18.857654 master-0 kubenswrapper[6976]: I0318 08:53:18.857613 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/kube-rbac-proxy/0.log"
Mar 18 08:53:19.754992 master-0 kubenswrapper[6976]: I0318 08:53:19.754932 6976 generic.go:334] "Generic (PLEG): container finished" podID="01243eca-2966-40a3-9eeb-fa3edc917717" containerID="6e72ce9a41e5ca4914523ec4625d65a11abcfde5d826b953a0bb3e17605639e5" exitCode=0
Mar 18 08:53:19.754992 master-0 kubenswrapper[6976]: I0318 08:53:19.754965 6976 generic.go:334] "Generic (PLEG): container finished" podID="01243eca-2966-40a3-9eeb-fa3edc917717" containerID="7777a1d79f14128c1ec3e26bec66f6050a3c54aab4fef032e4afa313fab7fc66" exitCode=0
Mar 18 08:53:19.754992 master-0 kubenswrapper[6976]: I0318 08:53:19.754972 6976 generic.go:334] "Generic (PLEG): container finished" podID="01243eca-2966-40a3-9eeb-fa3edc917717" containerID="bd7092603130bec3a07549bc35a1bd4eb99757be126618dadbb88ce76a361a16" exitCode=0
Mar 18 08:53:19.754992 master-0 kubenswrapper[6976]: I0318 08:53:19.754988 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerDied","Data":"6e72ce9a41e5ca4914523ec4625d65a11abcfde5d826b953a0bb3e17605639e5"}
Mar 18 08:53:19.755665 master-0 kubenswrapper[6976]: I0318 08:53:19.755014 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerDied","Data":"7777a1d79f14128c1ec3e26bec66f6050a3c54aab4fef032e4afa313fab7fc66"}
Mar 18 08:53:19.755665 master-0 kubenswrapper[6976]: I0318 08:53:19.755025 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerDied","Data":"bd7092603130bec3a07549bc35a1bd4eb99757be126618dadbb88ce76a361a16"}
Mar 18 08:53:19.964787 master-0 kubenswrapper[6976]: I0318 08:53:19.961038 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log"
Mar 18 08:53:19.989651 master-0 kubenswrapper[6976]: I0318 08:53:19.979987 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/0.log"
Mar 18 08:53:19.989651 master-0 kubenswrapper[6976]: I0318 08:53:19.986061 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/baremetal-kube-rbac-proxy/0.log"
Mar 18 08:53:19.990906 master-0 kubenswrapper[6976]: I0318 08:53:19.990875 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log"
Mar 18 08:53:20.260887 master-0 kubenswrapper[6976]: I0318 08:53:20.260832 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/0.log"
Mar 18 08:53:20.456457 master-0 kubenswrapper[6976]: I0318 08:53:20.456399 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/1.log"
Mar 18 08:53:20.661296 master-0 kubenswrapper[6976]: I0318 08:53:20.661134 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_c393a935-1821-4742-b1bb-0ee52ada5434/installer/0.log"
Mar 18 08:53:20.695654 master-0 kubenswrapper[6976]: I0318 08:53:20.695605 6976 scope.go:117] "RemoveContainer" containerID="6af9b3db51dc2800e23bac1d32175e8ad4a26ab1ee574f2d956ea30888e63922"
Mar 18 08:53:20.976304 master-0 kubenswrapper[6976]: I0318 08:53:20.976234 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/1.log"
Mar 18 08:53:21.064651 master-0 kubenswrapper[6976]: I0318 08:53:21.064437 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/2.log"
Mar 18 08:53:21.256269 master-0 kubenswrapper[6976]: I0318 08:53:21.256066 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log"
Mar 18 08:53:21.462194 master-0 kubenswrapper[6976]: I0318 08:53:21.462138 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log"
Mar 18 08:53:21.657037 master-0 kubenswrapper[6976]: I0318 08:53:21.656938 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log"
Mar 18 08:53:21.858522 master-0 kubenswrapper[6976]: I0318 08:53:21.858466 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log"
Mar 18 08:53:22.058541 master-0 kubenswrapper[6976]: I0318 08:53:22.058497 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b75d3625-4131-465d-a8e2-4c42588c7630/installer/0.log"
Mar 18 08:53:22.255887 master-0 kubenswrapper[6976]: I0318 08:53:22.255732 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log"
Mar 18 08:53:22.458576 master-0 kubenswrapper[6976]: I0318 08:53:22.458516 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/2.log"
Mar 18 08:53:23.335686 master-0 kubenswrapper[6976]: I0318 08:53:23.333616 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/kube-controller-manager/3.log"
Mar 18 08:53:23.346877 master-0 kubenswrapper[6976]: I0318 08:53:23.346820 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/kube-controller-manager/4.log"
Mar 18 08:53:23.359050 master-0 kubenswrapper[6976]: I0318 08:53:23.359006 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_46f265536aba6292ead501bc9b49f327/cluster-policy-controller/0.log"
Mar 18 08:53:23.407699 master-0 kubenswrapper[6976]: W0318 08:53:23.404154 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdf1c657_a9dc_455a_b2fd_27a518bc5199.slice/crio-c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd WatchSource:0}: Error finding container 
c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd: Status 404 returned error can't find the container with id c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd Mar 18 08:53:23.408870 master-0 kubenswrapper[6976]: I0318 08:53:23.408370 6976 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 08:53:23.523674 master-0 kubenswrapper[6976]: I0318 08:53:23.523622 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log" Mar 18 08:53:23.779714 master-0 kubenswrapper[6976]: I0318 08:53:23.779648 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" event={"ID":"cdf1c657-a9dc-455a-b2fd-27a518bc5199","Type":"ContainerStarted","Data":"c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd"} Mar 18 08:53:24.111404 master-0 kubenswrapper[6976]: I0318 08:53:24.110541 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log" Mar 18 08:53:24.122770 master-0 kubenswrapper[6976]: I0318 08:53:24.121797 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log" Mar 18 08:53:24.127861 master-0 kubenswrapper[6976]: I0318 08:53:24.127718 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log" Mar 18 08:53:24.678920 master-0 kubenswrapper[6976]: I0318 08:53:24.678820 6976 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/2.log" Mar 18 08:53:24.685757 master-0 kubenswrapper[6976]: I0318 08:53:24.685713 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/1.log" Mar 18 08:53:24.692443 master-0 kubenswrapper[6976]: I0318 08:53:24.692396 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/2.log" Mar 18 08:53:26.359201 master-0 kubenswrapper[6976]: I0318 08:53:26.359131 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-77f845f574-2wpgz_a1f2b373-0c85-4028-9089-9e9dff5d37b5/fix-audit-permissions/0.log" Mar 18 08:53:29.816202 master-0 kubenswrapper[6976]: I0318 08:53:29.816151 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-77f845f574-2wpgz_a1f2b373-0c85-4028-9089-9e9dff5d37b5/openshift-apiserver/0.log" Mar 18 08:53:29.835592 master-0 kubenswrapper[6976]: I0318 08:53:29.835482 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-77f845f574-2wpgz_a1f2b373-0c85-4028-9089-9e9dff5d37b5/openshift-apiserver-check-endpoints/0.log" Mar 18 08:53:29.858198 master-0 kubenswrapper[6976]: I0318 08:53:29.858149 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/0.log" Mar 18 08:53:29.889833 master-0 kubenswrapper[6976]: I0318 08:53:29.889623 6976 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/1.log" Mar 18 08:53:29.903642 master-0 kubenswrapper[6976]: I0318 08:53:29.897892 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hhn7l_f6833a48-fccb-42bd-ac90-29f08d5bf7e8/catalog-operator/0.log" Mar 18 08:53:29.904855 master-0 kubenswrapper[6976]: I0318 08:53:29.904829 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-twp27_c00ee838-424f-482b-942f-08f0952a5ccd/olm-operator/0.log" Mar 18 08:53:29.915072 master-0 kubenswrapper[6976]: I0318 08:53:29.914992 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5_2d0da6e3-3887-4361-8eae-e7447f9ff72c/kube-rbac-proxy/0.log" Mar 18 08:53:29.919845 master-0 kubenswrapper[6976]: I0318 08:53:29.919814 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5_2d0da6e3-3887-4361-8eae-e7447f9ff72c/package-server-manager/0.log" Mar 18 08:53:29.926411 master-0 kubenswrapper[6976]: I0318 08:53:29.926362 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-c8d87f55b-gsv6r_50a2c23f-26af-4c7f-8ea6-996bcfe173d0/packageserver/0.log" Mar 18 08:53:37.081845 master-0 kubenswrapper[6976]: I0318 08:53:37.081787 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:53:37.228648 master-0 kubenswrapper[6976]: I0318 08:53:37.228550 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kckrf\" (UniqueName: \"kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf\") pod \"01243eca-2966-40a3-9eeb-fa3edc917717\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " Mar 18 08:53:37.228862 master-0 kubenswrapper[6976]: I0318 08:53:37.228689 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config\") pod \"01243eca-2966-40a3-9eeb-fa3edc917717\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " Mar 18 08:53:37.228862 master-0 kubenswrapper[6976]: I0318 08:53:37.228718 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls\") pod \"01243eca-2966-40a3-9eeb-fa3edc917717\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " Mar 18 08:53:37.228862 master-0 kubenswrapper[6976]: I0318 08:53:37.228778 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images\") pod \"01243eca-2966-40a3-9eeb-fa3edc917717\" (UID: \"01243eca-2966-40a3-9eeb-fa3edc917717\") " Mar 18 08:53:37.228862 master-0 kubenswrapper[6976]: I0318 08:53:37.228834 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube\") pod \"01243eca-2966-40a3-9eeb-fa3edc917717\" (UID: 
\"01243eca-2966-40a3-9eeb-fa3edc917717\") " Mar 18 08:53:37.229183 master-0 kubenswrapper[6976]: I0318 08:53:37.229150 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "01243eca-2966-40a3-9eeb-fa3edc917717" (UID: "01243eca-2966-40a3-9eeb-fa3edc917717"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:53:37.230487 master-0 kubenswrapper[6976]: I0318 08:53:37.230432 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "01243eca-2966-40a3-9eeb-fa3edc917717" (UID: "01243eca-2966-40a3-9eeb-fa3edc917717"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:53:37.230575 master-0 kubenswrapper[6976]: I0318 08:53:37.230444 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images" (OuterVolumeSpecName: "images") pod "01243eca-2966-40a3-9eeb-fa3edc917717" (UID: "01243eca-2966-40a3-9eeb-fa3edc917717"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:53:37.240675 master-0 kubenswrapper[6976]: I0318 08:53:37.240616 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf" (OuterVolumeSpecName: "kube-api-access-kckrf") pod "01243eca-2966-40a3-9eeb-fa3edc917717" (UID: "01243eca-2966-40a3-9eeb-fa3edc917717"). InnerVolumeSpecName "kube-api-access-kckrf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:53:37.240797 master-0 kubenswrapper[6976]: I0318 08:53:37.240765 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "01243eca-2966-40a3-9eeb-fa3edc917717" (UID: "01243eca-2966-40a3-9eeb-fa3edc917717"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:53:37.330180 master-0 kubenswrapper[6976]: I0318 08:53:37.330028 6976 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-images\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:37.330180 master-0 kubenswrapper[6976]: I0318 08:53:37.330088 6976 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/01243eca-2966-40a3-9eeb-fa3edc917717-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:37.330180 master-0 kubenswrapper[6976]: I0318 08:53:37.330104 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kckrf\" (UniqueName: \"kubernetes.io/projected/01243eca-2966-40a3-9eeb-fa3edc917717-kube-api-access-kckrf\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:37.330180 master-0 kubenswrapper[6976]: I0318 08:53:37.330116 6976 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/01243eca-2966-40a3-9eeb-fa3edc917717-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:37.330180 master-0 kubenswrapper[6976]: I0318 08:53:37.330134 6976 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/01243eca-2966-40a3-9eeb-fa3edc917717-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 08:53:37.928360 master-0 kubenswrapper[6976]: I0318 08:53:37.928277 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" event={"ID":"01243eca-2966-40a3-9eeb-fa3edc917717","Type":"ContainerDied","Data":"a6d0d50087a2677e8b796853bf55d588c131864c88810e12454811eaee66e456"} Mar 18 08:53:37.928360 master-0 kubenswrapper[6976]: I0318 08:53:37.928344 6976 scope.go:117] "RemoveContainer" containerID="6e72ce9a41e5ca4914523ec4625d65a11abcfde5d826b953a0bb3e17605639e5" Mar 18 08:53:37.928838 master-0 kubenswrapper[6976]: I0318 08:53:37.928478 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv" Mar 18 08:53:37.968039 master-0 kubenswrapper[6976]: I0318 08:53:37.967980 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"] Mar 18 08:53:38.018180 master-0 kubenswrapper[6976]: I0318 08:53:38.018084 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-wm4pv"] Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119273 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"] Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: E0318 08:53:38.119544 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="cluster-cloud-controller-manager" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 
08:53:38.119574 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="cluster-cloud-controller-manager" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: E0318 08:53:38.119602 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="config-sync-controllers" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119632 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="config-sync-controllers" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: E0318 08:53:38.119652 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="kube-rbac-proxy" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119663 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="kube-rbac-proxy" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119765 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="cluster-cloud-controller-manager" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119779 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="config-sync-controllers" Mar 18 08:53:38.120908 master-0 kubenswrapper[6976]: I0318 08:53:38.119794 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" containerName="kube-rbac-proxy" Mar 18 08:53:38.122706 master-0 kubenswrapper[6976]: I0318 08:53:38.121006 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.122788 master-0 kubenswrapper[6976]: I0318 08:53:38.122713 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 18 08:53:38.123138 master-0 kubenswrapper[6976]: I0318 08:53:38.123064 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hbb9q" Mar 18 08:53:38.125086 master-0 kubenswrapper[6976]: I0318 08:53:38.123432 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 08:53:38.125086 master-0 kubenswrapper[6976]: I0318 08:53:38.123441 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 08:53:38.125086 master-0 kubenswrapper[6976]: I0318 08:53:38.123648 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 08:53:38.125086 master-0 kubenswrapper[6976]: I0318 08:53:38.124010 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 08:53:38.139652 master-0 kubenswrapper[6976]: I0318 08:53:38.139609 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.139854 master-0 kubenswrapper[6976]: I0318 08:53:38.139717 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.139854 master-0 kubenswrapper[6976]: I0318 08:53:38.139766 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr9zx\" (UniqueName: \"kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.139854 master-0 kubenswrapper[6976]: I0318 08:53:38.139825 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.140045 master-0 kubenswrapper[6976]: I0318 08:53:38.139886 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.240582 master-0 kubenswrapper[6976]: I0318 08:53:38.240523 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9zx\" (UniqueName: \"kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.240777 master-0 kubenswrapper[6976]: I0318 08:53:38.240595 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.240777 master-0 kubenswrapper[6976]: I0318 08:53:38.240642 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.240777 master-0 kubenswrapper[6976]: I0318 08:53:38.240688 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.240777 master-0 kubenswrapper[6976]: I0318 08:53:38.240713 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.241362 master-0 kubenswrapper[6976]: I0318 08:53:38.241338 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.241955 master-0 kubenswrapper[6976]: I0318 08:53:38.241932 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.242391 master-0 kubenswrapper[6976]: I0318 08:53:38.242370 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.245503 master-0 kubenswrapper[6976]: I0318 08:53:38.245481 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.296511 master-0 kubenswrapper[6976]: I0318 08:53:38.296451 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9zx\" (UniqueName: \"kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.447333 master-0 kubenswrapper[6976]: I0318 08:53:38.447279 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 08:53:38.611496 master-0 kubenswrapper[6976]: I0318 08:53:38.611393 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01243eca-2966-40a3-9eeb-fa3edc917717" path="/var/lib/kubelet/pods/01243eca-2966-40a3-9eeb-fa3edc917717/volumes" Mar 18 08:53:40.169778 master-0 kubenswrapper[6976]: I0318 08:53:40.169699 6976 scope.go:117] "RemoveContainer" containerID="7777a1d79f14128c1ec3e26bec66f6050a3c54aab4fef032e4afa313fab7fc66" Mar 18 08:53:40.489111 master-0 kubenswrapper[6976]: I0318 08:53:40.489082 6976 scope.go:117] "RemoveContainer" containerID="bd7092603130bec3a07549bc35a1bd4eb99757be126618dadbb88ce76a361a16" Mar 18 08:53:40.515102 master-0 kubenswrapper[6976]: W0318 08:53:40.515071 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93cb5ef1_e8f1_4d11_8c93_1abf24626176.slice/crio-25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f WatchSource:0}: Error finding container 25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f: Status 404 returned error can't find the container with id 25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f Mar 18 08:53:40.526911 master-0 kubenswrapper[6976]: W0318 08:53:40.526868 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14489ef7_8df3_4a3b_a137_3a78e89d425b.slice/crio-b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89 WatchSource:0}: Error finding container b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89: Status 404 returned error can't find the container with id b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89 Mar 18 08:53:40.677528 master-0 kubenswrapper[6976]: I0318 08:53:40.677486 6976 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl"]
Mar 18 08:53:40.684197 master-0 kubenswrapper[6976]: W0318 08:53:40.684152 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b1447b_1659_405b_81e0_21f0cf3e7a2c.slice/crio-fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217 WatchSource:0}: Error finding container fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217: Status 404 returned error can't find the container with id fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217
Mar 18 08:53:40.962825 master-0 kubenswrapper[6976]: I0318 08:53:40.960267 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" event={"ID":"cdf1c657-a9dc-455a-b2fd-27a518bc5199","Type":"ContainerStarted","Data":"074b5618ab791b52853fa3c8e57f02c4f2b94c0be2a32f214be11143542113e2"}
Mar 18 08:53:40.962825 master-0 kubenswrapper[6976]: I0318 08:53:40.960390 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:40.967289 master-0 kubenswrapper[6976]: I0318 08:53:40.967177 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"77222f1857306a427ed0136d01e66abea08222205dcb9a92415c3629bd81b945"}
Mar 18 08:53:40.967289 master-0 kubenswrapper[6976]: I0318 08:53:40.967214 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"7614f67ab42a92a0cedef41e5a4853cd6e5b7388a0d9d5d3571435c2df397b78"}
Mar 18 08:53:40.973597 master-0 kubenswrapper[6976]: I0318 08:53:40.973544 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 08:53:40.977443 master-0 kubenswrapper[6976]: I0318 08:53:40.977385 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfdcz" event={"ID":"1c322813-b574-4b46-b760-208ccecd01a5","Type":"ContainerStarted","Data":"dae73ee3ae724b2c21523292592ef38e39e0a433287c5f3b59839f74c5990e24"}
Mar 18 08:53:40.987612 master-0 kubenswrapper[6976]: I0318 08:53:40.983151 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" podStartSLOduration=27.85583824 podStartE2EDuration="44.983137616s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:53:23.408329339 +0000 UTC m=+302.993930934" lastFinishedPulling="2026-03-18 08:53:40.535628705 +0000 UTC m=+320.121230310" observedRunningTime="2026-03-18 08:53:40.982083459 +0000 UTC m=+320.567685054" watchObservedRunningTime="2026-03-18 08:53:40.983137616 +0000 UTC m=+320.568739201"
Mar 18 08:53:40.991658 master-0 kubenswrapper[6976]: I0318 08:53:40.989758 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerStarted","Data":"9ac32046c5add06c7112266ce422d6cd5a84efecd46bf95a0b99b1364bf42c11"}
Mar 18 08:53:40.996588 master-0 kubenswrapper[6976]: I0318 08:53:40.996173 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" event={"ID":"fdb52116-9c55-4464-99c8-fc2e4559996b","Type":"ContainerStarted","Data":"bdeb3e204eeda9a4ca5f0b606295f7a8a8b0db7e2e36aab9adc87281923f44e9"}
Mar 18 08:53:41.015588 master-0 kubenswrapper[6976]: I0318 08:53:41.011996 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rw7hw" event={"ID":"14489ef7-8df3-4a3b-a137-3a78e89d425b","Type":"ContainerStarted","Data":"7860b775222b874972d5fd5d1107ea5b1b4cf97fa7ade1fff35f4957b39dd914"}
Mar 18 08:53:41.015588 master-0 kubenswrapper[6976]: I0318 08:53:41.012042 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-rw7hw" event={"ID":"14489ef7-8df3-4a3b-a137-3a78e89d425b","Type":"ContainerStarted","Data":"b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89"}
Mar 18 08:53:41.019584 master-0 kubenswrapper[6976]: I0318 08:53:41.016113 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gpbt" event={"ID":"bf5fd4cc-959e-4878-82e9-b0f90dba6553","Type":"ContainerStarted","Data":"5a0a55d40814df39dc638c32e9fe75e6b627c413e28b2b6c92eeb933e420f49c"}
Mar 18 08:53:41.039807 master-0 kubenswrapper[6976]: I0318 08:53:41.039113 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl" event={"ID":"17b1447b-1659-405b-81e0-21f0cf3e7a2c","Type":"ContainerStarted","Data":"73fbb142de9c9826eec5cc58aba2404cc43d7f68f78818e8a04915e549a2dd8e"}
Mar 18 08:53:41.039807 master-0 kubenswrapper[6976]: I0318 08:53:41.039169 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl" event={"ID":"17b1447b-1659-405b-81e0-21f0cf3e7a2c","Type":"ContainerStarted","Data":"fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217"}
Mar 18 08:53:41.047457 master-0 kubenswrapper[6976]: I0318 08:53:41.046226 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" event={"ID":"a0cd1cf7-be6f-4baf-8761-69c693476de9","Type":"ContainerStarted","Data":"99ea637f908899f3c91ea05ee2b0d7e3ac50162756d8cfe11cb446dfbb2129bd"}
Mar 18 08:53:41.052981 master-0 kubenswrapper[6976]: I0318 08:53:41.052018 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r6jd" event={"ID":"995ec82c-b593-416a-9287-6020a484855c","Type":"ContainerStarted","Data":"6158208c344c114482182b4073df205ae1396e550c8ee72baa6c0932a13e4a44"}
Mar 18 08:53:41.053653 master-0 kubenswrapper[6976]: I0318 08:53:41.053298 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerStarted","Data":"25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f"}
Mar 18 08:53:41.101532 master-0 kubenswrapper[6976]: I0318 08:53:41.101210 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" podStartSLOduration=4.402478245 podStartE2EDuration="45.101185644s" podCreationTimestamp="2026-03-18 08:52:56 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.790338362 +0000 UTC m=+279.375939957" lastFinishedPulling="2026-03-18 08:53:40.489045761 +0000 UTC m=+320.074647356" observedRunningTime="2026-03-18 08:53:41.065475898 +0000 UTC m=+320.651077503" watchObservedRunningTime="2026-03-18 08:53:41.101185644 +0000 UTC m=+320.686787239"
Mar 18 08:53:41.140143 master-0 kubenswrapper[6976]: I0318 08:53:41.140072 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-rw7hw" podStartSLOduration=23.140051201 podStartE2EDuration="23.140051201s" podCreationTimestamp="2026-03-18 08:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:41.137777963 +0000 UTC m=+320.723379558" watchObservedRunningTime="2026-03-18 08:53:41.140051201 +0000 UTC m=+320.725652796"
Mar 18 08:53:41.220255 master-0 kubenswrapper[6976]: I0318 08:53:41.220198 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl" podStartSLOduration=369.220179827 podStartE2EDuration="6m9.220179827s" podCreationTimestamp="2026-03-18 08:47:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:41.2179613 +0000 UTC m=+320.803562895" watchObservedRunningTime="2026-03-18 08:53:41.220179827 +0000 UTC m=+320.805781422"
Mar 18 08:53:41.247289 master-0 kubenswrapper[6976]: I0318 08:53:41.247235 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" podStartSLOduration=5.280535259 podStartE2EDuration="46.247219561s" podCreationTimestamp="2026-03-18 08:52:55 +0000 UTC" firstStartedPulling="2026-03-18 08:52:59.178969049 +0000 UTC m=+278.764570644" lastFinishedPulling="2026-03-18 08:53:40.145653351 +0000 UTC m=+319.731254946" observedRunningTime="2026-03-18 08:53:41.245183398 +0000 UTC m=+320.830785003" watchObservedRunningTime="2026-03-18 08:53:41.247219561 +0000 UTC m=+320.832821156"
Mar 18 08:53:41.854867 master-0 kubenswrapper[6976]: I0318 08:53:41.854822 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"]
Mar 18 08:53:41.860480 master-0 kubenswrapper[6976]: I0318 08:53:41.855957 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:41.861846 master-0 kubenswrapper[6976]: I0318 08:53:41.861752 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 08:53:41.867613 master-0 kubenswrapper[6976]: I0318 08:53:41.867534 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 08:53:41.867807 master-0 kubenswrapper[6976]: I0318 08:53:41.867630 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 08:53:41.867807 master-0 kubenswrapper[6976]: I0318 08:53:41.867718 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mbtdj"
Mar 18 08:53:41.885044 master-0 kubenswrapper[6976]: I0318 08:53:41.884956 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"]
Mar 18 08:53:42.014733 master-0 kubenswrapper[6976]: I0318 08:53:42.013444 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.014733 master-0 kubenswrapper[6976]: I0318 08:53:42.013534 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.014733 master-0 kubenswrapper[6976]: I0318 08:53:42.013632 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.014733 master-0 kubenswrapper[6976]: I0318 08:53:42.013672 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42f4\" (UniqueName: \"kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.076844 master-0 kubenswrapper[6976]: I0318 08:53:42.076801 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"9dbd5259fa451b69d11a8bc83167e67bddf6f95e3acdb32c9bfafbbdb85570c2"}
Mar 18 08:53:42.077067 master-0 kubenswrapper[6976]: I0318 08:53:42.077051 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"096ac353f933435e5c018fb15b66b68ffb3a1e47071e3f93549e3c9af4316fb4"}
Mar 18 08:53:42.082498 master-0 kubenswrapper[6976]: I0318 08:53:42.082442 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf5fd4cc-959e-4878-82e9-b0f90dba6553" containerID="5a0a55d40814df39dc638c32e9fe75e6b627c413e28b2b6c92eeb933e420f49c" exitCode=0
Mar 18 08:53:42.082780 master-0 kubenswrapper[6976]: I0318 08:53:42.082531 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gpbt" event={"ID":"bf5fd4cc-959e-4878-82e9-b0f90dba6553","Type":"ContainerDied","Data":"5a0a55d40814df39dc638c32e9fe75e6b627c413e28b2b6c92eeb933e420f49c"}
Mar 18 08:53:42.082780 master-0 kubenswrapper[6976]: I0318 08:53:42.082603 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2gpbt" event={"ID":"bf5fd4cc-959e-4878-82e9-b0f90dba6553","Type":"ContainerStarted","Data":"31efbe7a47ff82a06de20a764eefe9b2a1aafc2ba9076aca3a715ba619680c8e"}
Mar 18 08:53:42.085015 master-0 kubenswrapper[6976]: I0318 08:53:42.084982 6976 generic.go:334] "Generic (PLEG): container finished" podID="1c322813-b574-4b46-b760-208ccecd01a5" containerID="dae73ee3ae724b2c21523292592ef38e39e0a433287c5f3b59839f74c5990e24" exitCode=0
Mar 18 08:53:42.085126 master-0 kubenswrapper[6976]: I0318 08:53:42.085044 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfdcz" event={"ID":"1c322813-b574-4b46-b760-208ccecd01a5","Type":"ContainerDied","Data":"dae73ee3ae724b2c21523292592ef38e39e0a433287c5f3b59839f74c5990e24"}
Mar 18 08:53:42.087954 master-0 kubenswrapper[6976]: I0318 08:53:42.087922 6976 generic.go:334] "Generic (PLEG): container finished" podID="f2fcd92f-0a58-4c87-8213-715453486aca" containerID="9ac32046c5add06c7112266ce422d6cd5a84efecd46bf95a0b99b1364bf42c11" exitCode=0
Mar 18 08:53:42.088064 master-0 kubenswrapper[6976]: I0318 08:53:42.088005 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerDied","Data":"9ac32046c5add06c7112266ce422d6cd5a84efecd46bf95a0b99b1364bf42c11"}
Mar 18 08:53:42.089594 master-0 kubenswrapper[6976]: I0318 08:53:42.089551 6976 generic.go:334] "Generic (PLEG): container finished" podID="995ec82c-b593-416a-9287-6020a484855c" containerID="6158208c344c114482182b4073df205ae1396e550c8ee72baa6c0932a13e4a44" exitCode=0
Mar 18 08:53:42.089858 master-0 kubenswrapper[6976]: I0318 08:53:42.089717 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r6jd" event={"ID":"995ec82c-b593-416a-9287-6020a484855c","Type":"ContainerDied","Data":"6158208c344c114482182b4073df205ae1396e550c8ee72baa6c0932a13e4a44"}
Mar 18 08:53:42.115461 master-0 kubenswrapper[6976]: I0318 08:53:42.115401 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.115642 master-0 kubenswrapper[6976]: I0318 08:53:42.115477 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.115642 master-0 kubenswrapper[6976]: I0318 08:53:42.115598 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.116296 master-0 kubenswrapper[6976]: I0318 08:53:42.116258 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g42f4\" (UniqueName: \"kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.116948 master-0 kubenswrapper[6976]: I0318 08:53:42.116894 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.121196 master-0 kubenswrapper[6976]: I0318 08:53:42.121164 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.122351 master-0 kubenswrapper[6976]: I0318 08:53:42.122314 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.129975 master-0 kubenswrapper[6976]: I0318 08:53:42.129850 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2gpbt" podStartSLOduration=19.600486186 podStartE2EDuration="58.129823233s" podCreationTimestamp="2026-03-18 08:52:44 +0000 UTC" firstStartedPulling="2026-03-18 08:53:03.034409304 +0000 UTC m=+282.620010899" lastFinishedPulling="2026-03-18 08:53:41.563746341 +0000 UTC m=+321.149347946" observedRunningTime="2026-03-18 08:53:42.127492753 +0000 UTC m=+321.713094348" watchObservedRunningTime="2026-03-18 08:53:42.129823233 +0000 UTC m=+321.715424828"
Mar 18 08:53:42.130118 master-0 kubenswrapper[6976]: I0318 08:53:42.130087 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" podStartSLOduration=4.130079909 podStartE2EDuration="4.130079909s" podCreationTimestamp="2026-03-18 08:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:42.107604913 +0000 UTC m=+321.693206508" watchObservedRunningTime="2026-03-18 08:53:42.130079909 +0000 UTC m=+321.715681504"
Mar 18 08:53:42.135994 master-0 kubenswrapper[6976]: I0318 08:53:42.135962 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42f4\" (UniqueName: \"kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:42.203698 master-0 kubenswrapper[6976]: I0318 08:53:42.203648 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 08:53:43.102559 master-0 kubenswrapper[6976]: I0318 08:53:43.102497 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerStarted","Data":"ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a"}
Mar 18 08:53:43.114381 master-0 kubenswrapper[6976]: I0318 08:53:43.114339 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"]
Mar 18 08:53:43.121171 master-0 kubenswrapper[6976]: W0318 08:53:43.121094 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8683c8c6_3a77_4b46_8898_142f9781b49c.slice/crio-08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d WatchSource:0}: Error finding container 08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d: Status 404 returned error can't find the container with id 08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d
Mar 18 08:53:43.140228 master-0 kubenswrapper[6976]: I0318 08:53:43.140164 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podStartSLOduration=276.929575962 podStartE2EDuration="4m39.14012327s" podCreationTimestamp="2026-03-18 08:49:04 +0000 UTC" firstStartedPulling="2026-03-18 08:53:40.522090648 +0000 UTC m=+320.107692243" lastFinishedPulling="2026-03-18 08:53:42.732637956 +0000 UTC m=+322.318239551" observedRunningTime="2026-03-18 08:53:43.137180514 +0000 UTC m=+322.722782119" watchObservedRunningTime="2026-03-18 08:53:43.14012327 +0000 UTC m=+322.725724865"
Mar 18 08:53:43.277776 master-0 kubenswrapper[6976]: I0318 08:53:43.277739 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:43.280155 master-0 kubenswrapper[6976]: I0318 08:53:43.280114 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:43.280155 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:43.280155 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:43.280155 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:43.280346 master-0 kubenswrapper[6976]: I0318 08:53:43.280155 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:44.110260 master-0 kubenswrapper[6976]: I0318 08:53:44.110211 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" event={"ID":"8683c8c6-3a77-4b46-8898-142f9781b49c","Type":"ContainerStarted","Data":"08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d"}
Mar 18 08:53:44.112054 master-0 kubenswrapper[6976]: I0318 08:53:44.112028 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x8lj" event={"ID":"f2fcd92f-0a58-4c87-8213-715453486aca","Type":"ContainerStarted","Data":"8235b69466948054ac05d8ad3a041295acd53d54ba685ece2bfa696634ad4617"}
Mar 18 08:53:44.114506 master-0 kubenswrapper[6976]: I0318 08:53:44.114480 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4r6jd" event={"ID":"995ec82c-b593-416a-9287-6020a484855c","Type":"ContainerStarted","Data":"921516b4b32dff225b1e26105c1ec698a1dd0b96605e0be025eb50b048a2a1d7"}
Mar 18 08:53:44.117777 master-0 kubenswrapper[6976]: I0318 08:53:44.117742 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfdcz" event={"ID":"1c322813-b574-4b46-b760-208ccecd01a5","Type":"ContainerStarted","Data":"300a0787e5e0393d29e420a9f80fcf41d5c3b4182d67e98379763a9cef852f5a"}
Mar 18 08:53:44.137901 master-0 kubenswrapper[6976]: I0318 08:53:44.137818 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5x8lj" podStartSLOduration=18.161973419 podStartE2EDuration="1m1.137791754s" podCreationTimestamp="2026-03-18 08:52:43 +0000 UTC" firstStartedPulling="2026-03-18 08:53:00.166803719 +0000 UTC m=+279.752405314" lastFinishedPulling="2026-03-18 08:53:43.142622044 +0000 UTC m=+322.728223649" observedRunningTime="2026-03-18 08:53:44.137784984 +0000 UTC m=+323.723386579" watchObservedRunningTime="2026-03-18 08:53:44.137791754 +0000 UTC m=+323.723393349"
Mar 18 08:53:44.160657 master-0 kubenswrapper[6976]: I0318 08:53:44.158821 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4r6jd" podStartSLOduration=16.261933574 podStartE2EDuration="59.158790653s" podCreationTimestamp="2026-03-18 08:52:45 +0000 UTC" firstStartedPulling="2026-03-18 08:53:00.160656092 +0000 UTC m=+279.746257687" lastFinishedPulling="2026-03-18 08:53:43.057513161 +0000 UTC m=+322.643114766" observedRunningTime="2026-03-18 08:53:44.15789772 +0000 UTC m=+323.743499335" watchObservedRunningTime="2026-03-18 08:53:44.158790653 +0000 UTC m=+323.744392248"
Mar 18 08:53:44.182082 master-0 kubenswrapper[6976]: I0318 08:53:44.181992 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfdcz" podStartSLOduration=21.065406432 podStartE2EDuration="1m1.181969387s" podCreationTimestamp="2026-03-18 08:52:43 +0000 UTC" firstStartedPulling="2026-03-18 08:53:03.029285532 +0000 UTC m=+282.614887127" lastFinishedPulling="2026-03-18 08:53:43.145848487 +0000 UTC m=+322.731450082" observedRunningTime="2026-03-18 08:53:44.18091517 +0000 UTC m=+323.766516765" watchObservedRunningTime="2026-03-18 08:53:44.181969387 +0000 UTC m=+323.767570992"
Mar 18 08:53:44.284801 master-0 kubenswrapper[6976]: I0318 08:53:44.284598 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:44.284801 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:44.284801 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:44.284801 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:44.286119 master-0 kubenswrapper[6976]: I0318 08:53:44.284684 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:45.277464 master-0 kubenswrapper[6976]: I0318 08:53:45.277394 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:53:45.282685 master-0 kubenswrapper[6976]: I0318 08:53:45.282651 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:45.282685 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:45.282685 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:45.282685 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:45.284629 master-0 kubenswrapper[6976]: I0318 08:53:45.282705 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:46.131020 master-0 kubenswrapper[6976]: I0318 08:53:46.130454 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" event={"ID":"8683c8c6-3a77-4b46-8898-142f9781b49c","Type":"ContainerStarted","Data":"b5b9ca536caf770f0ba1de103de6e321450476ea8702fa52d5c6a270b29c3022"}
Mar 18 08:53:46.279789 master-0 kubenswrapper[6976]: I0318 08:53:46.279738 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:46.279789 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:46.279789 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:46.279789 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:46.280580 master-0 kubenswrapper[6976]: I0318 08:53:46.280529 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:47.138684 master-0 kubenswrapper[6976]: I0318 08:53:47.138553 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" event={"ID":"8683c8c6-3a77-4b46-8898-142f9781b49c","Type":"ContainerStarted","Data":"c9fbc22e22ea089f78af57801b4c81d963b0876341b5a891ee3976e09b81d8f1"}
Mar 18 08:53:47.162606 master-0 kubenswrapper[6976]: I0318 08:53:47.162501 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" podStartSLOduration=3.6859907769999998 podStartE2EDuration="6.162481858s" podCreationTimestamp="2026-03-18 08:53:41 +0000 UTC" firstStartedPulling="2026-03-18 08:53:43.134311551 +0000 UTC m=+322.719913156" lastFinishedPulling="2026-03-18 08:53:45.610802642 +0000 UTC m=+325.196404237" observedRunningTime="2026-03-18 08:53:47.160036745 +0000 UTC m=+326.745638370" watchObservedRunningTime="2026-03-18 08:53:47.162481858 +0000 UTC m=+326.748083453"
Mar 18 08:53:47.280558 master-0 kubenswrapper[6976]: I0318 08:53:47.280478 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:47.280558 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:47.280558 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:47.280558 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:47.281284 master-0 kubenswrapper[6976]: I0318 08:53:47.280602 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:48.279378 master-0 kubenswrapper[6976]: I0318 08:53:48.279298 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:48.279378 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:48.279378 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:48.279378 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:48.279991 master-0 kubenswrapper[6976]: I0318 08:53:48.279382 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:48.445461 master-0 kubenswrapper[6976]: I0318 08:53:48.445410 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:53:48.445951 master-0 kubenswrapper[6976]: I0318 08:53:48.445516 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:53:48.507071 master-0 kubenswrapper[6976]: I0318 08:53:48.507000 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:53:48.507225 master-0 kubenswrapper[6976]: I0318 08:53:48.507099 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:53:48.519660 master-0 kubenswrapper[6976]: I0318 08:53:48.519539 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:53:48.660498 master-0 kubenswrapper[6976]: I0318 08:53:48.660356 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:53:48.660498 master-0 kubenswrapper[6976]: I0318 08:53:48.660444 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:53:48.684190 master-0 kubenswrapper[6976]: I0318 08:53:48.684136 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:53:48.684190 master-0 kubenswrapper[6976]: I0318 08:53:48.684187 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:53:48.705131 master-0 kubenswrapper[6976]: I0318 08:53:48.705068 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:53:48.723086 master-0 kubenswrapper[6976]: I0318 08:53:48.723031 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:53:49.205313 master-0 kubenswrapper[6976]: I0318 08:53:49.205267 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x8lj"
Mar 18 08:53:49.209228 master-0 kubenswrapper[6976]: I0318 08:53:49.209195 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"]
Mar 18 08:53:49.210181 master-0 kubenswrapper[6976]: I0318 08:53:49.210160 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"
Mar 18 08:53:49.216862 master-0 kubenswrapper[6976]: I0318 08:53:49.216833 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfdcz"
Mar 18 08:53:49.217061 master-0 kubenswrapper[6976]: I0318 08:53:49.216882 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s9qtf"
Mar 18 08:53:49.217286 master-0 kubenswrapper[6976]: I0318 08:53:49.216919 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 18 08:53:49.219205 master-0 kubenswrapper[6976]: I0318 08:53:49.219169 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 08:53:49.220805 master-0 kubenswrapper[6976]: I0318 08:53:49.220770 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2gpbt"
Mar 18 08:53:49.224995 master-0 kubenswrapper[6976]: I0318 08:53:49.224965 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-kp8pg"]
Mar 18 08:53:49.226240 master-0 kubenswrapper[6976]: I0318 08:53:49.226223 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 08:53:49.227888 master-0 kubenswrapper[6976]: I0318 08:53:49.227727 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 08:53:49.228132 master-0 kubenswrapper[6976]: I0318 08:53:49.228101 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-29bbg"
Mar 18 08:53:49.229972 master-0 kubenswrapper[6976]: I0318 08:53:49.228222 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 08:53:49.240872 master-0 kubenswrapper[6976]: I0318 08:53:49.240829 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"]
Mar 18 08:53:49.253819 master-0 kubenswrapper[6976]: I0318 08:53:49.252364 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"]
Mar 18 08:53:49.253819 master-0 kubenswrapper[6976]: I0318 08:53:49.253337 6976 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.257335 master-0 kubenswrapper[6976]: I0318 08:53:49.257306 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 18 08:53:49.257640 master-0 kubenswrapper[6976]: I0318 08:53:49.257613 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m2754" Mar 18 08:53:49.257803 master-0 kubenswrapper[6976]: I0318 08:53:49.257785 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 18 08:53:49.257922 master-0 kubenswrapper[6976]: I0318 08:53:49.257905 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 08:53:49.279699 master-0 kubenswrapper[6976]: I0318 08:53:49.279661 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:49.279699 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:49.279699 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:49.279699 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:49.279944 master-0 kubenswrapper[6976]: I0318 08:53:49.279708 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:49.288269 master-0 kubenswrapper[6976]: I0318 08:53:49.288215 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"] Mar 
18 08:53:49.348299 master-0 kubenswrapper[6976]: I0318 08:53:49.348267 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.348441 master-0 kubenswrapper[6976]: I0318 08:53:49.348426 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgbr\" (UniqueName: \"kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.348527 master-0 kubenswrapper[6976]: I0318 08:53:49.348512 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.348625 master-0 kubenswrapper[6976]: I0318 08:53:49.348613 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.348713 master-0 kubenswrapper[6976]: I0318 08:53:49.348698 6976 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.348803 master-0 kubenswrapper[6976]: I0318 08:53:49.348778 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.348867 master-0 kubenswrapper[6976]: I0318 08:53:49.348856 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.348941 master-0 kubenswrapper[6976]: I0318 08:53:49.348930 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.349004 master-0 kubenswrapper[6976]: I0318 08:53:49.348993 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: 
\"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.349077 master-0 kubenswrapper[6976]: I0318 08:53:49.349066 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.349145 master-0 kubenswrapper[6976]: I0318 08:53:49.349132 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.349315 master-0 kubenswrapper[6976]: I0318 08:53:49.349285 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.349366 master-0 kubenswrapper[6976]: I0318 08:53:49.349343 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.349396 master-0 kubenswrapper[6976]: I0318 08:53:49.349381 6976 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-774fx\" (UniqueName: \"kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.349442 master-0 kubenswrapper[6976]: I0318 08:53:49.349424 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2mwd\" (UniqueName: \"kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.349483 master-0 kubenswrapper[6976]: I0318 08:53:49.349453 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.349514 master-0 kubenswrapper[6976]: I0318 08:53:49.349506 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.349545 master-0 kubenswrapper[6976]: I0318 08:53:49.349522 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450363 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgbr\" (UniqueName: \"kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450444 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450481 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450510 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450534 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.450588 master-0 kubenswrapper[6976]: I0318 08:53:49.450556 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450602 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450624 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450649 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod 
\"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450670 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450699 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450730 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450782 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774fx\" (UniqueName: \"kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450813 6976 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m2mwd\" (UniqueName: \"kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450840 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450876 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450902 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.451195 master-0 kubenswrapper[6976]: I0318 08:53:49.450952 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: 
\"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.451748 master-0 kubenswrapper[6976]: E0318 08:53:49.451723 6976 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Mar 18 08:53:49.451871 master-0 kubenswrapper[6976]: E0318 08:53:49.451858 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls podName:15798f4d-8bcc-4e24-bb18-8dff1f4edf59 nodeName:}" failed. No retries permitted until 2026-03-18 08:53:49.951840658 +0000 UTC m=+329.537442253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-nbkgf" (UID: "15798f4d-8bcc-4e24-bb18-8dff1f4edf59") : secret "kube-state-metrics-tls" not found Mar 18 08:53:49.452128 master-0 kubenswrapper[6976]: I0318 08:53:49.452083 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.452207 master-0 kubenswrapper[6976]: E0318 08:53:49.452183 6976 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 18 08:53:49.452258 master-0 kubenswrapper[6976]: E0318 08:53:49.452229 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls podName:599418d3-6afa-46ab-9afa-659134f7ac94 nodeName:}" failed. 
No retries permitted until 2026-03-18 08:53:49.952216877 +0000 UTC m=+329.537818562 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls") pod "node-exporter-kp8pg" (UID: "599418d3-6afa-46ab-9afa-659134f7ac94") : secret "node-exporter-tls" not found Mar 18 08:53:49.452832 master-0 kubenswrapper[6976]: I0318 08:53:49.452313 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.452832 master-0 kubenswrapper[6976]: I0318 08:53:49.452445 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.452832 master-0 kubenswrapper[6976]: E0318 08:53:49.452803 6976 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 18 08:53:49.452983 master-0 kubenswrapper[6976]: I0318 08:53:49.452828 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.452983 master-0 kubenswrapper[6976]: E0318 08:53:49.452847 6976 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls podName:2b59dbf5-0a61-4981-aed3-e73550615c4a nodeName:}" failed. No retries permitted until 2026-03-18 08:53:49.952832963 +0000 UTC m=+329.538434558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-rm78n" (UID: "2b59dbf5-0a61-4981-aed3-e73550615c4a") : secret "openshift-state-metrics-tls" not found Mar 18 08:53:49.452983 master-0 kubenswrapper[6976]: I0318 08:53:49.452866 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.454667 master-0 kubenswrapper[6976]: I0318 08:53:49.453345 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.454667 master-0 kubenswrapper[6976]: I0318 08:53:49.453393 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.454667 master-0 kubenswrapper[6976]: I0318 08:53:49.453406 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: 
\"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.454667 master-0 kubenswrapper[6976]: I0318 08:53:49.453506 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.465636 master-0 kubenswrapper[6976]: I0318 08:53:49.455153 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.465636 master-0 kubenswrapper[6976]: I0318 08:53:49.464030 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.470116 master-0 kubenswrapper[6976]: I0318 08:53:49.469735 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.473665 master-0 kubenswrapper[6976]: I0318 08:53:49.473244 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgbr\" (UniqueName: \"kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.478586 master-0 kubenswrapper[6976]: I0318 08:53:49.476250 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mwd\" (UniqueName: \"kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.478586 master-0 kubenswrapper[6976]: I0318 08:53:49.477924 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774fx\" (UniqueName: \"kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.563346 master-0 kubenswrapper[6976]: I0318 08:53:49.563274 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4r6jd" podUID="995ec82c-b593-416a-9287-6020a484855c" containerName="registry-server" probeResult="failure" output=< Mar 18 08:53:49.563346 master-0 kubenswrapper[6976]: timeout: failed to connect service ":50051" within 1s Mar 18 08:53:49.563346 master-0 kubenswrapper[6976]: > Mar 18 08:53:49.956721 master-0 kubenswrapper[6976]: I0318 08:53:49.956678 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: 
\"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:49.956912 master-0 kubenswrapper[6976]: I0318 08:53:49.956726 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.956963 master-0 kubenswrapper[6976]: I0318 08:53:49.956899 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:49.960050 master-0 kubenswrapper[6976]: I0318 08:53:49.960009 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:49.960163 master-0 kubenswrapper[6976]: I0318 08:53:49.960141 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 
08:53:49.961141 master-0 kubenswrapper[6976]: I0318 08:53:49.961097 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:50.167173 master-0 kubenswrapper[6976]: I0318 08:53:50.167071 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 08:53:50.243283 master-0 kubenswrapper[6976]: I0318 08:53:50.243134 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 08:53:50.258071 master-0 kubenswrapper[6976]: I0318 08:53:50.258028 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 08:53:50.285624 master-0 kubenswrapper[6976]: W0318 08:53:50.284857 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod599418d3_6afa_46ab_9afa_659134f7ac94.slice/crio-a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904 WatchSource:0}: Error finding container a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904: Status 404 returned error can't find the container with id a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904 Mar 18 08:53:50.285624 master-0 kubenswrapper[6976]: I0318 08:53:50.284870 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:50.285624 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 
08:53:50.285624 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:50.285624 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:50.285624 master-0 kubenswrapper[6976]: I0318 08:53:50.284939 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:50.644702 master-0 kubenswrapper[6976]: I0318 08:53:50.644141 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"] Mar 18 08:53:50.649969 master-0 kubenswrapper[6976]: W0318 08:53:50.649879 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b59dbf5_0a61_4981_aed3_e73550615c4a.slice/crio-f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5 WatchSource:0}: Error finding container f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5: Status 404 returned error can't find the container with id f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5 Mar 18 08:53:50.682556 master-0 kubenswrapper[6976]: I0318 08:53:50.682440 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"] Mar 18 08:53:51.173439 master-0 kubenswrapper[6976]: I0318 08:53:51.173363 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" event={"ID":"15798f4d-8bcc-4e24-bb18-8dff1f4edf59","Type":"ContainerStarted","Data":"317bca26800a314970aa73cabc27ffb650dc50aed545acb8b5a9d2409b853eae"} Mar 18 08:53:51.174625 master-0 kubenswrapper[6976]: I0318 08:53:51.174510 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kp8pg" 
event={"ID":"599418d3-6afa-46ab-9afa-659134f7ac94","Type":"ContainerStarted","Data":"a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904"} Mar 18 08:53:51.176441 master-0 kubenswrapper[6976]: I0318 08:53:51.176352 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" event={"ID":"2b59dbf5-0a61-4981-aed3-e73550615c4a","Type":"ContainerStarted","Data":"35b00764e79eec249ee745a1195b0dbc54c07b349fa58bef6d89cdb62810486b"} Mar 18 08:53:51.176527 master-0 kubenswrapper[6976]: I0318 08:53:51.176461 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" event={"ID":"2b59dbf5-0a61-4981-aed3-e73550615c4a","Type":"ContainerStarted","Data":"a4e2dec22c8e42c4f16f41d8c6b19e59c136fb8255fc4170b8f4700bd3f27a80"} Mar 18 08:53:51.176579 master-0 kubenswrapper[6976]: I0318 08:53:51.176532 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" event={"ID":"2b59dbf5-0a61-4981-aed3-e73550615c4a","Type":"ContainerStarted","Data":"f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5"} Mar 18 08:53:51.279362 master-0 kubenswrapper[6976]: I0318 08:53:51.279289 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:51.279362 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:51.279362 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:51.279362 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:51.279738 master-0 kubenswrapper[6976]: I0318 08:53:51.279398 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:52.184320 master-0 kubenswrapper[6976]: I0318 08:53:52.184121 6976 generic.go:334] "Generic (PLEG): container finished" podID="599418d3-6afa-46ab-9afa-659134f7ac94" containerID="be9197abb6a4f7b0149993aa1f56516c44e239640ef2e0e8bd7924f48826c43c" exitCode=0 Mar 18 08:53:52.184320 master-0 kubenswrapper[6976]: I0318 08:53:52.184170 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kp8pg" event={"ID":"599418d3-6afa-46ab-9afa-659134f7ac94","Type":"ContainerDied","Data":"be9197abb6a4f7b0149993aa1f56516c44e239640ef2e0e8bd7924f48826c43c"} Mar 18 08:53:52.279892 master-0 kubenswrapper[6976]: I0318 08:53:52.279841 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:52.279892 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:52.279892 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:52.279892 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:52.280187 master-0 kubenswrapper[6976]: I0318 08:53:52.279914 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:53.280215 master-0 kubenswrapper[6976]: I0318 08:53:53.280139 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:53.280215 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 
08:53:53.280215 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:53.280215 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:53.280974 master-0 kubenswrapper[6976]: I0318 08:53:53.280223 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:54.199801 master-0 kubenswrapper[6976]: I0318 08:53:54.199688 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kp8pg" event={"ID":"599418d3-6afa-46ab-9afa-659134f7ac94","Type":"ContainerStarted","Data":"0eac220e25f8e22926d096bb0696cdf5e682fcca6a8690c51159709ec83275d7"} Mar 18 08:53:54.280509 master-0 kubenswrapper[6976]: I0318 08:53:54.280382 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:54.280509 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:54.280509 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:54.280509 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:54.281242 master-0 kubenswrapper[6976]: I0318 08:53:54.280521 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:55.206109 master-0 kubenswrapper[6976]: I0318 08:53:55.206047 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" 
event={"ID":"15798f4d-8bcc-4e24-bb18-8dff1f4edf59","Type":"ContainerStarted","Data":"d0930d28d1fdb529c842e4f2ed66ad664859a2ae56191016c64121570fbab847"} Mar 18 08:53:55.208175 master-0 kubenswrapper[6976]: I0318 08:53:55.208137 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-kp8pg" event={"ID":"599418d3-6afa-46ab-9afa-659134f7ac94","Type":"ContainerStarted","Data":"01d6005fb4ea38c01fa059c2843f3b8485ad7446c0d025b8981f15e49d056206"} Mar 18 08:53:55.279290 master-0 kubenswrapper[6976]: I0318 08:53:55.279235 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:55.279290 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:55.279290 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:55.279290 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:55.279543 master-0 kubenswrapper[6976]: I0318 08:53:55.279295 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:56.221464 master-0 kubenswrapper[6976]: I0318 08:53:56.219847 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" event={"ID":"15798f4d-8bcc-4e24-bb18-8dff1f4edf59","Type":"ContainerStarted","Data":"e0589f3a08327bbf41679c4527e75e012e5204d1d0fcd4e5351f9475cac7955c"} Mar 18 08:53:56.280343 master-0 kubenswrapper[6976]: I0318 08:53:56.280246 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:56.280343 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:56.280343 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:56.280343 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:56.280343 master-0 kubenswrapper[6976]: I0318 08:53:56.280328 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:56.498888 master-0 kubenswrapper[6976]: I0318 08:53:56.498780 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-kp8pg" podStartSLOduration=6.609087103 podStartE2EDuration="7.498753766s" podCreationTimestamp="2026-03-18 08:53:49 +0000 UTC" firstStartedPulling="2026-03-18 08:53:50.289918518 +0000 UTC m=+329.875520113" lastFinishedPulling="2026-03-18 08:53:51.179585181 +0000 UTC m=+330.765186776" observedRunningTime="2026-03-18 08:53:56.496609071 +0000 UTC m=+336.082210696" watchObservedRunningTime="2026-03-18 08:53:56.498753766 +0000 UTC m=+336.084355391" Mar 18 08:53:56.614856 master-0 kubenswrapper[6976]: I0318 08:53:56.614798 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7875f64c8-kmr8t"] Mar 18 08:53:56.615489 master-0 kubenswrapper[6976]: I0318 08:53:56.615458 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7875f64c8-kmr8t"] Mar 18 08:53:56.615650 master-0 kubenswrapper[6976]: I0318 08:53:56.615613 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.629299 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2gj3dpncb7vk4" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.629537 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.629722 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.629760 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.629853 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9xv2f" Mar 18 08:53:56.634593 master-0 kubenswrapper[6976]: I0318 08:53:56.630011 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 08:53:56.765807 master-0 kubenswrapper[6976]: I0318 08:53:56.765700 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.765807 master-0 kubenswrapper[6976]: I0318 08:53:56.765759 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.765807 master-0 kubenswrapper[6976]: I0318 08:53:56.765790 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.766127 master-0 kubenswrapper[6976]: I0318 08:53:56.765816 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.766127 master-0 kubenswrapper[6976]: I0318 08:53:56.765846 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.766127 master-0 kubenswrapper[6976]: I0318 08:53:56.765908 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " 
pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.766127 master-0 kubenswrapper[6976]: I0318 08:53:56.765949 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867073 master-0 kubenswrapper[6976]: I0318 08:53:56.867023 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867365 master-0 kubenswrapper[6976]: I0318 08:53:56.867343 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867497 master-0 kubenswrapper[6976]: I0318 08:53:56.867476 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867649 master-0 kubenswrapper[6976]: I0318 08:53:56.867628 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867776 master-0 kubenswrapper[6976]: I0318 08:53:56.867758 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.867898 master-0 kubenswrapper[6976]: I0318 08:53:56.867881 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.868031 master-0 kubenswrapper[6976]: I0318 08:53:56.868014 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.868418 master-0 kubenswrapper[6976]: I0318 08:53:56.868380 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 
08:53:56.868519 master-0 kubenswrapper[6976]: I0318 08:53:56.868491 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.869247 master-0 kubenswrapper[6976]: I0318 08:53:56.869213 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.870972 master-0 kubenswrapper[6976]: I0318 08:53:56.870900 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.871066 master-0 kubenswrapper[6976]: I0318 08:53:56.871042 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.872194 master-0 kubenswrapper[6976]: I0318 08:53:56.872172 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: 
\"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.884148 master-0 kubenswrapper[6976]: I0318 08:53:56.884112 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:56.957782 master-0 kubenswrapper[6976]: I0318 08:53:56.957724 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:53:57.234871 master-0 kubenswrapper[6976]: I0318 08:53:57.234773 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" event={"ID":"15798f4d-8bcc-4e24-bb18-8dff1f4edf59","Type":"ContainerStarted","Data":"df2b875ad286551cf62c1d13aae2442d21ad95772fc22fc744b70af2ad012c3f"} Mar 18 08:53:57.240986 master-0 kubenswrapper[6976]: I0318 08:53:57.240920 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" event={"ID":"2b59dbf5-0a61-4981-aed3-e73550615c4a","Type":"ContainerStarted","Data":"23ab3c06cdd68eec9c1e745d83247012c15db2a233b391bc0e4857019aef0c52"} Mar 18 08:53:57.267664 master-0 kubenswrapper[6976]: I0318 08:53:57.267524 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" podStartSLOduration=4.660826113 podStartE2EDuration="8.267420666s" podCreationTimestamp="2026-03-18 08:53:49 +0000 UTC" firstStartedPulling="2026-03-18 08:53:50.694878196 +0000 UTC m=+330.280479831" lastFinishedPulling="2026-03-18 08:53:54.301472789 +0000 UTC m=+333.887074384" observedRunningTime="2026-03-18 08:53:57.263722751 +0000 UTC m=+336.849324396" 
watchObservedRunningTime="2026-03-18 08:53:57.267420666 +0000 UTC m=+336.853022301" Mar 18 08:53:57.287529 master-0 kubenswrapper[6976]: I0318 08:53:57.287367 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:57.287529 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:57.287529 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:57.287529 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:57.287940 master-0 kubenswrapper[6976]: I0318 08:53:57.287474 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:57.305642 master-0 kubenswrapper[6976]: I0318 08:53:57.305517 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" podStartSLOduration=3.57275241 podStartE2EDuration="8.305491072s" podCreationTimestamp="2026-03-18 08:53:49 +0000 UTC" firstStartedPulling="2026-03-18 08:53:51.015898922 +0000 UTC m=+330.601500557" lastFinishedPulling="2026-03-18 08:53:55.748637594 +0000 UTC m=+335.334239219" observedRunningTime="2026-03-18 08:53:57.301469159 +0000 UTC m=+336.887070774" watchObservedRunningTime="2026-03-18 08:53:57.305491072 +0000 UTC m=+336.891092677" Mar 18 08:53:57.336101 master-0 kubenswrapper[6976]: I0318 08:53:57.335941 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7875f64c8-kmr8t"] Mar 18 08:53:57.340842 master-0 kubenswrapper[6976]: W0318 08:53:57.340788 6976 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87381a51_96e6_4e86_bdae_c8ac3fc7a039.slice/crio-c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66 WatchSource:0}: Error finding container c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66: Status 404 returned error can't find the container with id c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66 Mar 18 08:53:57.620041 master-0 kubenswrapper[6976]: I0318 08:53:57.619928 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 08:53:58.249587 master-0 kubenswrapper[6976]: I0318 08:53:58.249487 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" event={"ID":"87381a51-96e6-4e86-bdae-c8ac3fc7a039","Type":"ContainerStarted","Data":"c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66"} Mar 18 08:53:58.279610 master-0 kubenswrapper[6976]: I0318 08:53:58.279530 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:53:58.279610 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:53:58.279610 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:53:58.279610 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:53:58.279942 master-0 kubenswrapper[6976]: I0318 08:53:58.279612 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:53:58.566797 master-0 kubenswrapper[6976]: I0318 08:53:58.565712 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:53:58.609468 master-0 kubenswrapper[6976]: I0318 08:53:58.609187 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.609159687 podStartE2EDuration="1.609159687s" podCreationTimestamp="2026-03-18 08:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:53:58.609002833 +0000 UTC m=+338.194604448" watchObservedRunningTime="2026-03-18 08:53:58.609159687 +0000 UTC m=+338.194761332"
Mar 18 08:53:58.629098 master-0 kubenswrapper[6976]: I0318 08:53:58.629031 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4r6jd"
Mar 18 08:53:59.257693 master-0 kubenswrapper[6976]: I0318 08:53:59.257545 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" event={"ID":"87381a51-96e6-4e86-bdae-c8ac3fc7a039","Type":"ContainerStarted","Data":"81a151a3aa12b152f9071a9f499fc6c53ed0410a76702e645d7cd7db06bbf80b"}
Mar 18 08:53:59.283321 master-0 kubenswrapper[6976]: I0318 08:53:59.283261 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:53:59.283321 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:53:59.283321 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:53:59.283321 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:53:59.283739 master-0 kubenswrapper[6976]: I0318 08:53:59.283322 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:53:59.283739 master-0 kubenswrapper[6976]: I0318 08:53:59.283468 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" podStartSLOduration=1.6423834259999999 podStartE2EDuration="3.283442055s" podCreationTimestamp="2026-03-18 08:53:56 +0000 UTC" firstStartedPulling="2026-03-18 08:53:57.34281651 +0000 UTC m=+336.928418115" lastFinishedPulling="2026-03-18 08:53:58.983875149 +0000 UTC m=+338.569476744" observedRunningTime="2026-03-18 08:53:59.279665668 +0000 UTC m=+338.865267273" watchObservedRunningTime="2026-03-18 08:53:59.283442055 +0000 UTC m=+338.869043650"
Mar 18 08:54:00.280245 master-0 kubenswrapper[6976]: I0318 08:54:00.280175 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:00.280245 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:00.280245 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:00.280245 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:00.280877 master-0 kubenswrapper[6976]: I0318 08:54:00.280272 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:01.280686 master-0 kubenswrapper[6976]: I0318 08:54:01.280517 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:01.280686 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:01.280686 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:01.280686 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:01.281787 master-0 kubenswrapper[6976]: I0318 08:54:01.280692 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:02.281339 master-0 kubenswrapper[6976]: I0318 08:54:02.281185 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:02.281339 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:02.281339 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:02.281339 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:02.281339 master-0 kubenswrapper[6976]: I0318 08:54:02.281246 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:03.279947 master-0 kubenswrapper[6976]: I0318 08:54:03.279843 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:03.279947 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:03.279947 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:03.279947 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:03.279947 master-0 kubenswrapper[6976]: I0318 08:54:03.279926 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:04.280550 master-0 kubenswrapper[6976]: I0318 08:54:04.280453 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:04.280550 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:04.280550 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:04.280550 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:04.280550 master-0 kubenswrapper[6976]: I0318 08:54:04.280535 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:05.280697 master-0 kubenswrapper[6976]: I0318 08:54:05.280623 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:05.280697 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:05.280697 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:05.280697 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:05.281586 master-0 kubenswrapper[6976]: I0318 08:54:05.280701 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:06.280780 master-0 kubenswrapper[6976]: I0318 08:54:06.280671 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:06.280780 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:06.280780 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:06.280780 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:06.281867 master-0 kubenswrapper[6976]: I0318 08:54:06.280802 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:07.279879 master-0 kubenswrapper[6976]: I0318 08:54:07.279806 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:07.279879 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:07.279879 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:07.279879 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:07.280232 master-0 kubenswrapper[6976]: I0318 08:54:07.279886 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:08.280389 master-0 kubenswrapper[6976]: I0318 08:54:08.280325 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:08.280389 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:08.280389 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:08.280389 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:08.281282 master-0 kubenswrapper[6976]: I0318 08:54:08.281243 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:09.280057 master-0 kubenswrapper[6976]: I0318 08:54:09.279994 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:09.280057 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:09.280057 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:09.280057 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:09.280057 master-0 kubenswrapper[6976]: I0318 08:54:09.280070 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:10.280699 master-0 kubenswrapper[6976]: I0318 08:54:10.280544 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:10.280699 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:10.280699 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:10.280699 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:10.281822 master-0 kubenswrapper[6976]: I0318 08:54:10.280695 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:11.280241 master-0 kubenswrapper[6976]: I0318 08:54:11.280124 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:11.280241 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:11.280241 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:11.280241 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:11.280636 master-0 kubenswrapper[6976]: I0318 08:54:11.280280 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:12.279998 master-0 kubenswrapper[6976]: I0318 08:54:12.279881 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:12.279998 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:12.279998 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:12.279998 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:12.280950 master-0 kubenswrapper[6976]: I0318 08:54:12.280022 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:13.280908 master-0 kubenswrapper[6976]: I0318 08:54:13.280799 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:13.280908 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:13.280908 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:13.280908 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:13.281871 master-0 kubenswrapper[6976]: I0318 08:54:13.280931 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:14.280450 master-0 kubenswrapper[6976]: I0318 08:54:14.280380 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:14.280450 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:14.280450 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:14.280450 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:14.280923 master-0 kubenswrapper[6976]: I0318 08:54:14.280469 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:15.280900 master-0 kubenswrapper[6976]: I0318 08:54:15.280816 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:15.280900 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:15.280900 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:15.280900 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:15.282141 master-0 kubenswrapper[6976]: I0318 08:54:15.280923 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:16.280863 master-0 kubenswrapper[6976]: I0318 08:54:16.280757 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:16.280863 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:16.280863 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:16.280863 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:16.280863 master-0 kubenswrapper[6976]: I0318 08:54:16.280853 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:16.958926 master-0 kubenswrapper[6976]: I0318 08:54:16.958369 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t"
Mar 18 08:54:16.959358 master-0 kubenswrapper[6976]: I0318 08:54:16.959281 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t"
Mar 18 08:54:17.280752 master-0 kubenswrapper[6976]: I0318 08:54:17.280511 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:17.280752 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:17.280752 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:17.280752 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:17.280752 master-0 kubenswrapper[6976]: I0318 08:54:17.280665 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:18.280712 master-0 kubenswrapper[6976]: I0318 08:54:18.280633 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:18.280712 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:18.280712 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:18.280712 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:18.281790 master-0 kubenswrapper[6976]: I0318 08:54:18.280744 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:19.279379 master-0 kubenswrapper[6976]: I0318 08:54:19.279320 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:19.279379 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:19.279379 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:19.279379 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:19.279706 master-0 kubenswrapper[6976]: I0318 08:54:19.279402 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:20.280626 master-0 kubenswrapper[6976]: I0318 08:54:20.280492 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:20.280626 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:20.280626 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:20.280626 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:20.281751 master-0 kubenswrapper[6976]: I0318 08:54:20.280662 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:21.279888 master-0 kubenswrapper[6976]: I0318 08:54:21.279803 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:21.279888 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:21.279888 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:21.279888 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:21.280202 master-0 kubenswrapper[6976]: I0318 08:54:21.279898 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:22.068618 master-0 kubenswrapper[6976]: I0318 08:54:22.067790 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-226gc"]
Mar 18 08:54:22.069901 master-0 kubenswrapper[6976]: I0318 08:54:22.069228 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.072886 master-0 kubenswrapper[6976]: I0318 08:54:22.072723 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 18 08:54:22.073070 master-0 kubenswrapper[6976]: I0318 08:54:22.073038 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 18 08:54:22.094150 master-0 kubenswrapper[6976]: I0318 08:54:22.073983 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 08:54:22.094150 master-0 kubenswrapper[6976]: I0318 08:54:22.074866 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-pfhv7"
Mar 18 08:54:22.094150 master-0 kubenswrapper[6976]: I0318 08:54:22.091417 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-226gc"]
Mar 18 08:54:22.147799 master-0 kubenswrapper[6976]: I0318 08:54:22.147706 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.148086 master-0 kubenswrapper[6976]: I0318 08:54:22.148035 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5nwv\" (UniqueName: \"kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.250089 master-0 kubenswrapper[6976]: I0318 08:54:22.250026 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.250089 master-0 kubenswrapper[6976]: I0318 08:54:22.250100 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5nwv\" (UniqueName: \"kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.250483 master-0 kubenswrapper[6976]: E0318 08:54:22.250351 6976 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 18 08:54:22.250483 master-0 kubenswrapper[6976]: E0318 08:54:22.250472 6976 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert podName:9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd nodeName:}" failed. No retries permitted until 2026-03-18 08:54:22.750444557 +0000 UTC m=+362.336046192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert") pod "ingress-canary-226gc" (UID: "9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd") : secret "canary-serving-cert" not found
Mar 18 08:54:22.275894 master-0 kubenswrapper[6976]: I0318 08:54:22.272318 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5nwv\" (UniqueName: \"kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.280346 master-0 kubenswrapper[6976]: I0318 08:54:22.280246 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:22.280346 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:22.280346 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:22.280346 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:22.280728 master-0 kubenswrapper[6976]: I0318 08:54:22.280372 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:22.461147 master-0 kubenswrapper[6976]: I0318 08:54:22.461101 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/1.log"
Mar 18 08:54:22.462440 master-0 kubenswrapper[6976]: I0318 08:54:22.462411 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/0.log"
Mar 18 08:54:22.462502 master-0 kubenswrapper[6976]: I0318 08:54:22.462461 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3" exitCode=1
Mar 18 08:54:22.462537 master-0 kubenswrapper[6976]: I0318 08:54:22.462496 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerDied","Data":"b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3"}
Mar 18 08:54:22.462537 master-0 kubenswrapper[6976]: I0318 08:54:22.462531 6976 scope.go:117] "RemoveContainer" containerID="9d25c9c9b5ced91c32a1b9dd7e48ce6b3235062e8dd7fa065d776452831b8b1b"
Mar 18 08:54:22.463043 master-0 kubenswrapper[6976]: I0318 08:54:22.463012 6976 scope.go:117] "RemoveContainer" containerID="b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3"
Mar 18 08:54:22.463297 master-0 kubenswrapper[6976]: E0318 08:54:22.463257 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 08:54:22.756668 master-0 kubenswrapper[6976]: I0318 08:54:22.756429 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:22.761310 master-0 kubenswrapper[6976]: I0318 08:54:22.761259 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:23.027665 master-0 kubenswrapper[6976]: I0318 08:54:23.027452 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 08:54:23.280598 master-0 kubenswrapper[6976]: I0318 08:54:23.280413 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:23.280598 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:23.280598 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:23.280598 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:23.280598 master-0 kubenswrapper[6976]: I0318 08:54:23.280475 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:23.470787 master-0 kubenswrapper[6976]: I0318 08:54:23.470672 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/1.log"
Mar 18 08:54:23.510745 master-0 kubenswrapper[6976]: I0318 08:54:23.510690 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-226gc"]
Mar 18 08:54:23.516604 master-0 kubenswrapper[6976]: W0318 08:54:23.516304 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d66a9b2_7f9c_45bd_a793_b2ce9cd571cd.slice/crio-e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a WatchSource:0}: Error finding container e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a: Status 404 returned error can't find the container with id e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a
Mar 18 08:54:24.279714 master-0 kubenswrapper[6976]: I0318 08:54:24.279611 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:24.279714 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:24.279714 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:24.279714 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:24.280245 master-0 kubenswrapper[6976]: I0318 08:54:24.279715 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:24.481470 master-0 kubenswrapper[6976]: I0318 08:54:24.481382 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-226gc" event={"ID":"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd","Type":"ContainerStarted","Data":"fc07358d6814ad004434d737f62c997ba15f06dae8ab1e9b0b6eb5c5b2da2009"}
Mar 18 08:54:24.481470 master-0 kubenswrapper[6976]: I0318 08:54:24.481465 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-226gc" event={"ID":"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd","Type":"ContainerStarted","Data":"e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a"}
Mar 18 08:54:24.508682 master-0 kubenswrapper[6976]: I0318 08:54:24.508506 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-226gc" podStartSLOduration=2.508478028 podStartE2EDuration="2.508478028s" podCreationTimestamp="2026-03-18 08:54:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:54:24.506722874 +0000 UTC m=+364.092324509" watchObservedRunningTime="2026-03-18 08:54:24.508478028 +0000 UTC m=+364.094079663"
Mar 18 08:54:25.279721 master-0 kubenswrapper[6976]: I0318 08:54:25.279652 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:25.279721 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:25.279721 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:25.279721 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:25.280084 master-0 kubenswrapper[6976]: I0318 08:54:25.279750 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:26.280725 master-0 kubenswrapper[6976]: I0318 08:54:26.280631 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:26.280725 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:26.280725 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:26.280725 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:26.281910 master-0 kubenswrapper[6976]: I0318 08:54:26.280730 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:27.279671 master-0 kubenswrapper[6976]: I0318 08:54:27.279591 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:27.279671 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:27.279671 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:27.279671 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:27.280062 master-0 kubenswrapper[6976]: I0318 08:54:27.280030 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:28.279514 master-0 kubenswrapper[6976]: I0318 08:54:28.279446 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:28.279514 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:28.279514 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:28.279514 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:28.280912 master-0 kubenswrapper[6976]: I0318 08:54:28.280852 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:29.280524 master-0 kubenswrapper[6976]: I0318 08:54:29.280447 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:29.280524 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:29.280524 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:29.280524 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:29.281691 master-0 kubenswrapper[6976]: I0318 08:54:29.280525 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:54:30.280734 master-0 kubenswrapper[6976]: I0318 08:54:30.280662 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:54:30.280734 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:54:30.280734 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:54:30.280734 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:54:30.281734 master-0 kubenswrapper[6976]: I0318 08:54:30.280747 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:31.280687 master-0 kubenswrapper[6976]: I0318 08:54:31.280603 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:31.280687 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:31.280687 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:31.280687 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:31.281886 master-0 kubenswrapper[6976]: I0318 08:54:31.280717 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:32.278891 master-0 kubenswrapper[6976]: I0318 08:54:32.278836 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:32.278891 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:32.278891 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:32.278891 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:32.279177 master-0 kubenswrapper[6976]: I0318 08:54:32.278901 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:33.279683 master-0 kubenswrapper[6976]: I0318 08:54:33.279620 6976 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:33.279683 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:33.279683 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:33.279683 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:33.280733 master-0 kubenswrapper[6976]: I0318 08:54:33.279693 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:34.280590 master-0 kubenswrapper[6976]: I0318 08:54:34.280494 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:34.280590 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:34.280590 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:34.280590 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:34.280590 master-0 kubenswrapper[6976]: I0318 08:54:34.280582 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:35.280423 master-0 kubenswrapper[6976]: I0318 08:54:35.280353 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:35.280423 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:35.280423 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:35.280423 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:35.280962 master-0 kubenswrapper[6976]: I0318 08:54:35.280456 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:35.598937 master-0 kubenswrapper[6976]: I0318 08:54:35.598799 6976 scope.go:117] "RemoveContainer" containerID="b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3" Mar 18 08:54:36.280839 master-0 kubenswrapper[6976]: I0318 08:54:36.280739 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:36.280839 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:36.280839 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:36.280839 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:36.281878 master-0 kubenswrapper[6976]: I0318 08:54:36.280867 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:36.587172 master-0 kubenswrapper[6976]: I0318 08:54:36.587068 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/1.log" Mar 18 08:54:36.587538 master-0 
kubenswrapper[6976]: I0318 08:54:36.587503 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19"} Mar 18 08:54:36.967816 master-0 kubenswrapper[6976]: I0318 08:54:36.967705 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:54:36.974257 master-0 kubenswrapper[6976]: I0318 08:54:36.974184 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 08:54:37.280534 master-0 kubenswrapper[6976]: I0318 08:54:37.280357 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:37.280534 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:37.280534 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:37.280534 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:37.280534 master-0 kubenswrapper[6976]: I0318 08:54:37.280432 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:38.279404 master-0 kubenswrapper[6976]: I0318 08:54:38.279345 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:38.279404 master-0 kubenswrapper[6976]: 
[-]has-synced failed: reason withheld Mar 18 08:54:38.279404 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:38.279404 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:38.279683 master-0 kubenswrapper[6976]: I0318 08:54:38.279427 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:39.280764 master-0 kubenswrapper[6976]: I0318 08:54:39.280682 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:39.280764 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:39.280764 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:39.280764 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:39.281956 master-0 kubenswrapper[6976]: I0318 08:54:39.280785 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:40.280537 master-0 kubenswrapper[6976]: I0318 08:54:40.280470 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:40.280537 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:40.280537 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:40.280537 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:40.281797 master-0 
kubenswrapper[6976]: I0318 08:54:40.280548 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:41.280239 master-0 kubenswrapper[6976]: I0318 08:54:41.280173 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:41.280239 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:41.280239 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:41.280239 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:41.280655 master-0 kubenswrapper[6976]: I0318 08:54:41.280289 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:42.279849 master-0 kubenswrapper[6976]: I0318 08:54:42.279795 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:42.279849 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:42.279849 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:42.279849 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:42.280471 master-0 kubenswrapper[6976]: I0318 08:54:42.279863 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:43.280501 master-0 kubenswrapper[6976]: I0318 08:54:43.280394 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:43.280501 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:43.280501 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:43.280501 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:43.280501 master-0 kubenswrapper[6976]: I0318 08:54:43.280497 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:44.280537 master-0 kubenswrapper[6976]: I0318 08:54:44.280421 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:44.280537 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:44.280537 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:44.280537 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:44.281490 master-0 kubenswrapper[6976]: I0318 08:54:44.280540 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:45.285067 master-0 kubenswrapper[6976]: I0318 08:54:45.284890 6976 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:45.285067 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:45.285067 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:45.285067 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:45.285067 master-0 kubenswrapper[6976]: I0318 08:54:45.285054 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:46.280805 master-0 kubenswrapper[6976]: I0318 08:54:46.280664 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:46.280805 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:46.280805 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:46.280805 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:46.280805 master-0 kubenswrapper[6976]: I0318 08:54:46.280773 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:47.282443 master-0 kubenswrapper[6976]: I0318 08:54:47.281246 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
08:54:47.282443 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:47.282443 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:47.282443 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:47.282443 master-0 kubenswrapper[6976]: I0318 08:54:47.281336 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:48.280607 master-0 kubenswrapper[6976]: I0318 08:54:48.280466 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:48.280607 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:48.280607 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:48.280607 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:48.281198 master-0 kubenswrapper[6976]: I0318 08:54:48.280604 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:49.280675 master-0 kubenswrapper[6976]: I0318 08:54:49.280553 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:49.280675 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:49.280675 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:49.280675 master-0 kubenswrapper[6976]: healthz 
check failed Mar 18 08:54:49.281731 master-0 kubenswrapper[6976]: I0318 08:54:49.280680 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:50.281059 master-0 kubenswrapper[6976]: I0318 08:54:50.281000 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:50.281059 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:50.281059 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:50.281059 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:50.282047 master-0 kubenswrapper[6976]: I0318 08:54:50.281721 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:51.280219 master-0 kubenswrapper[6976]: I0318 08:54:51.280142 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:51.280219 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:51.280219 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:51.280219 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:51.280875 master-0 kubenswrapper[6976]: I0318 08:54:51.280234 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" 
podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:52.280500 master-0 kubenswrapper[6976]: I0318 08:54:52.280441 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:52.280500 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:52.280500 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:52.280500 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:52.281228 master-0 kubenswrapper[6976]: I0318 08:54:52.280518 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:53.280355 master-0 kubenswrapper[6976]: I0318 08:54:53.280266 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:53.280355 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:53.280355 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:53.280355 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:53.280355 master-0 kubenswrapper[6976]: I0318 08:54:53.280355 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:54.281061 master-0 kubenswrapper[6976]: I0318 08:54:54.280900 6976 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:54.281061 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:54.281061 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:54.281061 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:54.281061 master-0 kubenswrapper[6976]: I0318 08:54:54.280998 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:55.279608 master-0 kubenswrapper[6976]: I0318 08:54:55.279548 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:55.279608 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:55.279608 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:55.279608 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:55.280033 master-0 kubenswrapper[6976]: I0318 08:54:55.279615 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:56.279705 master-0 kubenswrapper[6976]: I0318 08:54:56.279621 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:56.279705 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:56.279705 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:56.279705 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:56.279705 master-0 kubenswrapper[6976]: I0318 08:54:56.279708 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:57.279705 master-0 kubenswrapper[6976]: I0318 08:54:57.279562 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:57.279705 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:57.279705 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:57.279705 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:57.279705 master-0 kubenswrapper[6976]: I0318 08:54:57.279643 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:58.279608 master-0 kubenswrapper[6976]: I0318 08:54:58.279439 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:58.279608 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:58.279608 master-0 kubenswrapper[6976]: [+]process-running ok 
Mar 18 08:54:58.279608 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:58.279947 master-0 kubenswrapper[6976]: I0318 08:54:58.279611 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:54:59.279651 master-0 kubenswrapper[6976]: I0318 08:54:59.279556 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:54:59.279651 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:54:59.279651 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:54:59.279651 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:54:59.280734 master-0 kubenswrapper[6976]: I0318 08:54:59.279663 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:00.279721 master-0 kubenswrapper[6976]: I0318 08:55:00.279640 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:00.279721 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:00.279721 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:00.279721 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:00.280321 master-0 kubenswrapper[6976]: I0318 08:55:00.279723 6976 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:01.280425 master-0 kubenswrapper[6976]: I0318 08:55:01.280336 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:01.280425 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:01.280425 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:01.280425 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:01.281455 master-0 kubenswrapper[6976]: I0318 08:55:01.280439 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:02.280841 master-0 kubenswrapper[6976]: I0318 08:55:02.280770 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:02.280841 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:02.280841 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:02.280841 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:02.282073 master-0 kubenswrapper[6976]: I0318 08:55:02.282021 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:03.280045 
master-0 kubenswrapper[6976]: I0318 08:55:03.279967 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:03.280045 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:03.280045 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:03.280045 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:03.280045 master-0 kubenswrapper[6976]: I0318 08:55:03.280036 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:04.279343 master-0 kubenswrapper[6976]: I0318 08:55:04.279284 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:04.279343 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:04.279343 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:04.279343 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:04.280064 master-0 kubenswrapper[6976]: I0318 08:55:04.279347 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:05.281032 master-0 kubenswrapper[6976]: I0318 08:55:05.280915 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:05.281032 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:05.281032 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:05.281032 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:05.281032 master-0 kubenswrapper[6976]: I0318 08:55:05.281009 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:06.280353 master-0 kubenswrapper[6976]: I0318 08:55:06.280284 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:06.280353 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:06.280353 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:06.280353 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:06.280662 master-0 kubenswrapper[6976]: I0318 08:55:06.280398 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:07.280906 master-0 kubenswrapper[6976]: I0318 08:55:07.280810 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:07.280906 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:07.280906 master-0 
kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:07.280906 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:07.281916 master-0 kubenswrapper[6976]: I0318 08:55:07.280923 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:08.281295 master-0 kubenswrapper[6976]: I0318 08:55:08.281227 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:08.281295 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:08.281295 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:08.281295 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:08.282250 master-0 kubenswrapper[6976]: I0318 08:55:08.281893 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:09.279854 master-0 kubenswrapper[6976]: I0318 08:55:09.279788 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:09.279854 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:09.279854 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:09.279854 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:09.280295 master-0 kubenswrapper[6976]: I0318 08:55:09.279896 6976 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:10.279681 master-0 kubenswrapper[6976]: I0318 08:55:10.279545 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:10.279681 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:10.279681 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:10.279681 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:10.280613 master-0 kubenswrapper[6976]: I0318 08:55:10.279721 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:11.279920 master-0 kubenswrapper[6976]: I0318 08:55:11.279866 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:11.279920 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:11.279920 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:11.279920 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:11.280697 master-0 kubenswrapper[6976]: I0318 08:55:11.279928 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 18 08:55:12.279216 master-0 kubenswrapper[6976]: I0318 08:55:12.279162 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:12.279216 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:12.279216 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:12.279216 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:12.279554 master-0 kubenswrapper[6976]: I0318 08:55:12.279224 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:13.279644 master-0 kubenswrapper[6976]: I0318 08:55:13.279507 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:13.279644 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:13.279644 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:13.279644 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:13.280742 master-0 kubenswrapper[6976]: I0318 08:55:13.279674 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:14.280746 master-0 kubenswrapper[6976]: I0318 08:55:14.280626 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:14.280746 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:14.280746 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:14.280746 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:14.280746 master-0 kubenswrapper[6976]: I0318 08:55:14.280749 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:15.280620 master-0 kubenswrapper[6976]: I0318 08:55:15.280503 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:15.280620 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:15.280620 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:15.280620 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:15.281905 master-0 kubenswrapper[6976]: I0318 08:55:15.280691 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:16.280368 master-0 kubenswrapper[6976]: I0318 08:55:16.280310 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:16.280368 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 
08:55:16.280368 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:16.280368 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:16.280816 master-0 kubenswrapper[6976]: I0318 08:55:16.280392 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:17.281676 master-0 kubenswrapper[6976]: I0318 08:55:17.281622 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:17.281676 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:17.281676 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:17.281676 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:17.282780 master-0 kubenswrapper[6976]: I0318 08:55:17.281686 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:18.280055 master-0 kubenswrapper[6976]: I0318 08:55:18.279973 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:18.280055 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:18.280055 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:18.280055 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:18.280522 master-0 kubenswrapper[6976]: I0318 08:55:18.280065 
6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:19.280012 master-0 kubenswrapper[6976]: I0318 08:55:19.279953 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:19.280012 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:19.280012 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:19.280012 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:19.280650 master-0 kubenswrapper[6976]: I0318 08:55:19.280028 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:20.280784 master-0 kubenswrapper[6976]: I0318 08:55:20.280721 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:20.280784 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:20.280784 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:20.280784 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:20.282103 master-0 kubenswrapper[6976]: I0318 08:55:20.280797 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 08:55:21.279194 master-0 kubenswrapper[6976]: I0318 08:55:21.279145 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:21.279194 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:21.279194 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:21.279194 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:21.279580 master-0 kubenswrapper[6976]: I0318 08:55:21.279213 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:22.280092 master-0 kubenswrapper[6976]: I0318 08:55:22.280018 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:22.280092 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:22.280092 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:22.280092 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:22.280913 master-0 kubenswrapper[6976]: I0318 08:55:22.280112 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:23.279637 master-0 kubenswrapper[6976]: I0318 08:55:23.279549 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:23.279637 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:23.279637 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:23.279637 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:23.279904 master-0 kubenswrapper[6976]: I0318 08:55:23.279649 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:24.279867 master-0 kubenswrapper[6976]: I0318 08:55:24.279813 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:24.279867 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:24.279867 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:24.279867 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:24.280411 master-0 kubenswrapper[6976]: I0318 08:55:24.279879 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:25.732248 master-0 kubenswrapper[6976]: I0318 08:55:25.732184 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:25.732248 master-0 kubenswrapper[6976]: 
[-]has-synced failed: reason withheld Mar 18 08:55:25.732248 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:25.732248 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:25.733010 master-0 kubenswrapper[6976]: I0318 08:55:25.732260 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:26.280169 master-0 kubenswrapper[6976]: I0318 08:55:26.280078 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:26.280169 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:26.280169 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:26.280169 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:26.280169 master-0 kubenswrapper[6976]: I0318 08:55:26.280163 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:27.281237 master-0 kubenswrapper[6976]: I0318 08:55:27.281152 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:27.281237 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:27.281237 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:27.281237 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:27.282487 master-0 
kubenswrapper[6976]: I0318 08:55:27.281258 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:28.280636 master-0 kubenswrapper[6976]: I0318 08:55:28.280392 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:28.280636 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:28.280636 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:28.280636 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:28.280636 master-0 kubenswrapper[6976]: I0318 08:55:28.280477 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:29.280536 master-0 kubenswrapper[6976]: I0318 08:55:29.280454 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:29.280536 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:29.280536 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:29.280536 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:29.281548 master-0 kubenswrapper[6976]: I0318 08:55:29.280558 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:30.280508 master-0 kubenswrapper[6976]: I0318 08:55:30.280434 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:30.280508 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:30.280508 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:30.280508 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:30.281240 master-0 kubenswrapper[6976]: I0318 08:55:30.280537 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:31.280134 master-0 kubenswrapper[6976]: I0318 08:55:31.280064 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:31.280134 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:31.280134 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:31.280134 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:31.281778 master-0 kubenswrapper[6976]: I0318 08:55:31.281729 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:32.280678 master-0 kubenswrapper[6976]: I0318 08:55:32.280523 6976 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:32.280678 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:32.280678 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:32.280678 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:32.281838 master-0 kubenswrapper[6976]: I0318 08:55:32.280710 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:33.280984 master-0 kubenswrapper[6976]: I0318 08:55:33.280899 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:33.280984 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:33.280984 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:33.280984 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:33.280984 master-0 kubenswrapper[6976]: I0318 08:55:33.280963 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:34.281131 master-0 kubenswrapper[6976]: I0318 08:55:34.281039 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
08:55:34.281131 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:34.281131 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:34.281131 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:34.282485 master-0 kubenswrapper[6976]: I0318 08:55:34.281135 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:35.281395 master-0 kubenswrapper[6976]: I0318 08:55:35.281317 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:35.281395 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:35.281395 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:35.281395 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:35.282090 master-0 kubenswrapper[6976]: I0318 08:55:35.281432 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:36.280783 master-0 kubenswrapper[6976]: I0318 08:55:36.280708 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:36.280783 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:36.280783 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:36.280783 master-0 kubenswrapper[6976]: healthz 
check failed Mar 18 08:55:36.281334 master-0 kubenswrapper[6976]: I0318 08:55:36.280804 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:37.281024 master-0 kubenswrapper[6976]: I0318 08:55:37.280953 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:37.281024 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:37.281024 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:37.281024 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:37.282241 master-0 kubenswrapper[6976]: I0318 08:55:37.282183 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:55:38.280241 master-0 kubenswrapper[6976]: I0318 08:55:38.280193 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:55:38.280241 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:55:38.280241 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:55:38.280241 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:55:38.280634 master-0 kubenswrapper[6976]: I0318 08:55:38.280594 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" 
podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:39.281293 master-0 kubenswrapper[6976]: I0318 08:55:39.281206 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:39.281293 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:55:39.281293 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:55:39.281293 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:55:39.281927 master-0 kubenswrapper[6976]: I0318 08:55:39.281326 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:40.280917 master-0 kubenswrapper[6976]: I0318 08:55:40.280848 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:40.280917 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:55:40.280917 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:55:40.280917 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:55:40.281296 master-0 kubenswrapper[6976]: I0318 08:55:40.280947 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:41.281071 master-0 kubenswrapper[6976]: I0318 08:55:41.280980 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:41.281071 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:55:41.281071 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:55:41.281071 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:55:41.281675 master-0 kubenswrapper[6976]: I0318 08:55:41.281099 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: I0318 08:55:42.280589 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: I0318 08:55:42.280683 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:55:42.280685 master-0 kubenswrapper[6976]: I0318 08:55:42.280736 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:55:42.282268 master-0 kubenswrapper[6976]: I0318 08:55:42.281631 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a"} pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" containerMessage="Container router failed startup probe, will be restarted"
Mar 18 08:55:42.282268 master-0 kubenswrapper[6976]: I0318 08:55:42.281668 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" containerID="cri-o://ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a" gracePeriod=3600
Mar 18 08:56:29.205097 master-0 kubenswrapper[6976]: I0318 08:56:29.204976 6976 generic.go:334] "Generic (PLEG): container finished" podID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerID="ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a" exitCode=0
Mar 18 08:56:29.205097 master-0 kubenswrapper[6976]: I0318 08:56:29.205068 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerDied","Data":"ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a"}
Mar 18 08:56:29.206149 master-0 kubenswrapper[6976]: I0318 08:56:29.205120 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerStarted","Data":"8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e"}
Mar 18 08:56:29.277318 master-0 kubenswrapper[6976]: I0318 08:56:29.277258 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:56:29.281008 master-0 kubenswrapper[6976]: I0318 08:56:29.280953 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:29.281008 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:29.281008 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:29.281008 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:29.281507 master-0 kubenswrapper[6976]: I0318 08:56:29.281032 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:30.280307 master-0 kubenswrapper[6976]: I0318 08:56:30.280203 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:30.280307 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:30.280307 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:30.280307 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:30.281486 master-0 kubenswrapper[6976]: I0318 08:56:30.280367 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:31.281043 master-0 kubenswrapper[6976]: I0318 08:56:31.280910 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:31.281043 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:31.281043 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:31.281043 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:31.281043 master-0 kubenswrapper[6976]: I0318 08:56:31.281028 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:32.281660 master-0 kubenswrapper[6976]: I0318 08:56:32.281553 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:32.281660 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:32.281660 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:32.281660 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:32.283259 master-0 kubenswrapper[6976]: I0318 08:56:32.281668 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:33.281295 master-0 kubenswrapper[6976]: I0318 08:56:33.281207 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:33.281295 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:33.281295 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:33.281295 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:33.281633 master-0 kubenswrapper[6976]: I0318 08:56:33.281304 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:34.281669 master-0 kubenswrapper[6976]: I0318 08:56:34.281549 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:34.281669 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:34.281669 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:34.281669 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:34.281669 master-0 kubenswrapper[6976]: I0318 08:56:34.281661 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:35.277367 master-0 kubenswrapper[6976]: I0318 08:56:35.277311 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn"
Mar 18 08:56:35.280905 master-0 kubenswrapper[6976]: I0318 08:56:35.280851 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:35.280905 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:35.280905 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:35.280905 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:35.281185 master-0 kubenswrapper[6976]: I0318 08:56:35.280952 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:36.280320 master-0 kubenswrapper[6976]: I0318 08:56:36.280209 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:36.280320 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:36.280320 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:36.280320 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:36.280320 master-0 kubenswrapper[6976]: I0318 08:56:36.280304 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:37.273150 master-0 kubenswrapper[6976]: I0318 08:56:37.273050 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/2.log"
Mar 18 08:56:37.274206 master-0 kubenswrapper[6976]: I0318 08:56:37.274152 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/1.log"
Mar 18 08:56:37.274838 master-0 kubenswrapper[6976]: I0318 08:56:37.274783 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19" exitCode=1
Mar 18 08:56:37.274948 master-0 kubenswrapper[6976]: I0318 08:56:37.274840 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerDied","Data":"4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19"}
Mar 18 08:56:37.274948 master-0 kubenswrapper[6976]: I0318 08:56:37.274909 6976 scope.go:117] "RemoveContainer" containerID="b5f7cf693149b169e2ca2431c906635fd55e0044ca6a526820ae0cf9a719f2b3"
Mar 18 08:56:37.277195 master-0 kubenswrapper[6976]: I0318 08:56:37.275840 6976 scope.go:117] "RemoveContainer" containerID="4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19"
Mar 18 08:56:37.277195 master-0 kubenswrapper[6976]: E0318 08:56:37.276340 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 08:56:37.283413 master-0 kubenswrapper[6976]: I0318 08:56:37.283350 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:37.283413 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:37.283413 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:37.283413 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:37.284494 master-0 kubenswrapper[6976]: I0318 08:56:37.284441 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:38.280024 master-0 kubenswrapper[6976]: I0318 08:56:38.279983 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:38.280024 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:38.280024 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:38.280024 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:38.280423 master-0 kubenswrapper[6976]: I0318 08:56:38.280032 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:38.281714 master-0 kubenswrapper[6976]: I0318 08:56:38.281689 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/2.log"
Mar 18 08:56:39.280389 master-0 kubenswrapper[6976]: I0318 08:56:39.280282 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:39.280389 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:39.280389 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:39.280389 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:39.280389 master-0 kubenswrapper[6976]: I0318 08:56:39.280362 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:40.281113 master-0 kubenswrapper[6976]: I0318 08:56:40.281031 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:40.281113 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:40.281113 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:40.281113 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:40.282194 master-0 kubenswrapper[6976]: I0318 08:56:40.281156 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:41.281877 master-0 kubenswrapper[6976]: I0318 08:56:41.281731 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:41.281877 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:41.281877 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:41.281877 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:41.281877 master-0 kubenswrapper[6976]: I0318 08:56:41.281855 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:42.281251 master-0 kubenswrapper[6976]: I0318 08:56:42.281144 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:42.281251 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:42.281251 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:42.281251 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:42.281251 master-0 kubenswrapper[6976]: I0318 08:56:42.281235 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:43.282755 master-0 kubenswrapper[6976]: I0318 08:56:43.282593 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:43.282755 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:43.282755 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:43.282755 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:43.282755 master-0 kubenswrapper[6976]: I0318 08:56:43.282745 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:44.279867 master-0 kubenswrapper[6976]: I0318 08:56:44.279810 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:44.279867 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:44.279867 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:44.279867 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:44.280525 master-0 kubenswrapper[6976]: I0318 08:56:44.280485 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:45.280358 master-0 kubenswrapper[6976]: I0318 08:56:45.280287 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:45.280358 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:45.280358 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:45.280358 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:45.281471 master-0 kubenswrapper[6976]: I0318 08:56:45.280363 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:46.282195 master-0 kubenswrapper[6976]: I0318 08:56:46.280653 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:46.282195 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:46.282195 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:46.282195 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:46.282195 master-0 kubenswrapper[6976]: I0318 08:56:46.280740 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:47.280746 master-0 kubenswrapper[6976]: I0318 08:56:47.280638 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:47.280746 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:47.280746 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:47.280746 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:47.280746 master-0 kubenswrapper[6976]: I0318 08:56:47.280733 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:48.280539 master-0 kubenswrapper[6976]: I0318 08:56:48.280467 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:48.280539 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:48.280539 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:48.280539 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:48.281477 master-0 kubenswrapper[6976]: I0318 08:56:48.280636 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:49.281485 master-0 kubenswrapper[6976]: I0318 08:56:49.281413 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:49.281485 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:49.281485 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:49.281485 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:49.282515 master-0 kubenswrapper[6976]: I0318 08:56:49.281503 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:50.280705 master-0 kubenswrapper[6976]: I0318 08:56:50.280616 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:50.280705 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:50.280705 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:50.280705 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:50.281402 master-0 kubenswrapper[6976]: I0318 08:56:50.280752 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:51.280958 master-0 kubenswrapper[6976]: I0318 08:56:51.280849 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:51.280958 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:51.280958 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:51.280958 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:51.282382 master-0 kubenswrapper[6976]: I0318 08:56:51.280971 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:52.283464 master-0 kubenswrapper[6976]: I0318 08:56:52.283395 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:52.283464 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:52.283464 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:52.283464 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:52.284005 master-0 kubenswrapper[6976]: I0318 08:56:52.283494 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:52.598851 master-0 kubenswrapper[6976]: I0318 08:56:52.598787 6976 scope.go:117] "RemoveContainer" containerID="4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19"
Mar 18 08:56:52.599204 master-0 kubenswrapper[6976]: E0318 08:56:52.599164 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 08:56:53.281265 master-0 kubenswrapper[6976]: I0318 08:56:53.281184 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:53.281265 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:53.281265 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:53.281265 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:53.281624 master-0 kubenswrapper[6976]: I0318 08:56:53.281405 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:54.281245 master-0 kubenswrapper[6976]: I0318 08:56:54.281159 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:54.281245 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:54.281245 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:54.281245 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:54.282521 master-0 kubenswrapper[6976]: I0318 08:56:54.281275 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:55.280933 master-0 kubenswrapper[6976]: I0318 08:56:55.280838 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:55.280933 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:55.280933 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:55.280933 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:55.282216 master-0 kubenswrapper[6976]: I0318 08:56:55.280956 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:56.280504 master-0 kubenswrapper[6976]: I0318 08:56:56.280367 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:56.280504 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:56.280504 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:56.280504 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:56.281307 master-0 kubenswrapper[6976]: I0318 08:56:56.280502 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:57.281177 master-0 kubenswrapper[6976]: I0318 08:56:57.281091 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:57.281177 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:57.281177 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:57.281177 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:57.282772 master-0 kubenswrapper[6976]: I0318 08:56:57.281185 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:58.280914 master-0 kubenswrapper[6976]: I0318 08:56:58.280848 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:58.280914 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:58.280914 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:58.280914 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:58.280914 master-0 kubenswrapper[6976]: I0318 08:56:58.280918 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:56:59.284758 master-0 kubenswrapper[6976]: I0318 08:56:59.284681 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:56:59.284758 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:56:59.284758 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:56:59.284758 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:56:59.285433 master-0 kubenswrapper[6976]: I0318 08:56:59.284787 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:00.280417 master-0 kubenswrapper[6976]: I0318 08:57:00.280237 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:00.280417 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:00.280417 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:00.280417 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:00.280417 master-0 kubenswrapper[6976]: I0318 08:57:00.280370 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:01.281073 master-0 kubenswrapper[6976]: I0318 08:57:01.280963 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:01.281073 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:01.281073 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:01.281073 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:01.282083 master-0 kubenswrapper[6976]: I0318 08:57:01.281826 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:02.282990 master-0 kubenswrapper[6976]: I0318 08:57:02.282878 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:02.282990 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:02.282990 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:02.282990 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:02.282990 master-0 kubenswrapper[6976]: I0318 08:57:02.282981 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:03.279915 master-0 kubenswrapper[6976]: I0318 08:57:03.279831 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:03.279915 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:03.279915 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:03.279915 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:03.279915 master-0
kubenswrapper[6976]: I0318 08:57:03.279910 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:04.281486 master-0 kubenswrapper[6976]: I0318 08:57:04.281360 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:04.281486 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:04.281486 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:04.281486 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:04.281486 master-0 kubenswrapper[6976]: I0318 08:57:04.281473 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:05.281968 master-0 kubenswrapper[6976]: I0318 08:57:05.281887 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:05.281968 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:05.281968 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:05.281968 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:05.282741 master-0 kubenswrapper[6976]: I0318 08:57:05.282021 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:06.280676 master-0 kubenswrapper[6976]: I0318 08:57:06.280495 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:06.280676 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:06.280676 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:06.280676 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:06.280676 master-0 kubenswrapper[6976]: I0318 08:57:06.280644 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:06.598844 master-0 kubenswrapper[6976]: I0318 08:57:06.598744 6976 scope.go:117] "RemoveContainer" containerID="4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19" Mar 18 08:57:07.280832 master-0 kubenswrapper[6976]: I0318 08:57:07.280727 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:07.280832 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:07.280832 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:07.280832 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:07.281522 master-0 kubenswrapper[6976]: I0318 08:57:07.280865 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:07.513665 master-0 kubenswrapper[6976]: I0318 08:57:07.513587 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/2.log" Mar 18 08:57:07.514201 master-0 kubenswrapper[6976]: I0318 08:57:07.514125 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226"} Mar 18 08:57:08.281145 master-0 kubenswrapper[6976]: I0318 08:57:08.281050 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:08.281145 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:08.281145 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:08.281145 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:08.282310 master-0 kubenswrapper[6976]: I0318 08:57:08.281167 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:09.281107 master-0 kubenswrapper[6976]: I0318 08:57:09.281023 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:09.281107 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 
08:57:09.281107 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:09.281107 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:09.281959 master-0 kubenswrapper[6976]: I0318 08:57:09.281113 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:10.281696 master-0 kubenswrapper[6976]: I0318 08:57:10.281610 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:10.281696 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:10.281696 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:10.281696 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:10.282865 master-0 kubenswrapper[6976]: I0318 08:57:10.281708 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:11.281923 master-0 kubenswrapper[6976]: I0318 08:57:11.281836 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:11.281923 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:11.281923 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:11.281923 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:11.282882 master-0 kubenswrapper[6976]: I0318 08:57:11.281947 
6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:12.280220 master-0 kubenswrapper[6976]: I0318 08:57:12.280056 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:12.280220 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:12.280220 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:12.280220 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:12.280220 master-0 kubenswrapper[6976]: I0318 08:57:12.280128 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:13.280257 master-0 kubenswrapper[6976]: I0318 08:57:13.280169 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:13.280257 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:13.280257 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:13.280257 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:13.281883 master-0 kubenswrapper[6976]: I0318 08:57:13.280267 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 08:57:14.282682 master-0 kubenswrapper[6976]: I0318 08:57:14.282537 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:14.282682 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:14.282682 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:14.282682 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:14.282682 master-0 kubenswrapper[6976]: I0318 08:57:14.282672 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:15.280395 master-0 kubenswrapper[6976]: I0318 08:57:15.280285 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:15.280395 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:15.280395 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:15.280395 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:15.280884 master-0 kubenswrapper[6976]: I0318 08:57:15.280407 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:16.280937 master-0 kubenswrapper[6976]: I0318 08:57:16.280808 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:16.280937 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:16.280937 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:16.280937 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:16.282117 master-0 kubenswrapper[6976]: I0318 08:57:16.280939 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:17.282109 master-0 kubenswrapper[6976]: I0318 08:57:17.282042 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:17.282109 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:17.282109 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:17.282109 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:17.283206 master-0 kubenswrapper[6976]: I0318 08:57:17.282730 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:18.279557 master-0 kubenswrapper[6976]: I0318 08:57:18.279504 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:18.279557 master-0 kubenswrapper[6976]: 
[-]has-synced failed: reason withheld Mar 18 08:57:18.279557 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:18.279557 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:18.280053 master-0 kubenswrapper[6976]: I0318 08:57:18.280017 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:19.280651 master-0 kubenswrapper[6976]: I0318 08:57:19.280502 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:19.280651 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:19.280651 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:19.280651 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:19.280651 master-0 kubenswrapper[6976]: I0318 08:57:19.280601 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:20.281344 master-0 kubenswrapper[6976]: I0318 08:57:20.281192 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:20.281344 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:20.281344 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:20.281344 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:20.281344 master-0 
kubenswrapper[6976]: I0318 08:57:20.281335 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:21.284166 master-0 kubenswrapper[6976]: I0318 08:57:21.284108 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:21.284166 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:21.284166 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:21.284166 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:21.284776 master-0 kubenswrapper[6976]: I0318 08:57:21.284172 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:22.282025 master-0 kubenswrapper[6976]: I0318 08:57:22.281946 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:22.282025 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:22.282025 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:22.282025 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:22.282496 master-0 kubenswrapper[6976]: I0318 08:57:22.282032 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:23.279535 master-0 kubenswrapper[6976]: I0318 08:57:23.279473 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:23.279535 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:23.279535 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:23.279535 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:23.280209 master-0 kubenswrapper[6976]: I0318 08:57:23.279596 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:24.282020 master-0 kubenswrapper[6976]: I0318 08:57:24.281934 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:24.282020 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:24.282020 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:24.282020 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:24.282958 master-0 kubenswrapper[6976]: I0318 08:57:24.282034 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:25.279950 master-0 kubenswrapper[6976]: I0318 08:57:25.279881 6976 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:25.279950 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:25.279950 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:25.279950 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:25.280357 master-0 kubenswrapper[6976]: I0318 08:57:25.280009 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:26.280395 master-0 kubenswrapper[6976]: I0318 08:57:26.280287 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:26.280395 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:26.280395 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:26.280395 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:26.280395 master-0 kubenswrapper[6976]: I0318 08:57:26.280382 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:27.278863 master-0 kubenswrapper[6976]: I0318 08:57:27.278813 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
08:57:27.278863 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:27.278863 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:27.278863 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:27.278863 master-0 kubenswrapper[6976]: I0318 08:57:27.278866 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:28.280513 master-0 kubenswrapper[6976]: I0318 08:57:28.280394 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:28.280513 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:28.280513 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:28.280513 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:28.280513 master-0 kubenswrapper[6976]: I0318 08:57:28.280497 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:29.279935 master-0 kubenswrapper[6976]: I0318 08:57:29.279867 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:29.279935 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:29.279935 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:29.279935 master-0 kubenswrapper[6976]: healthz 
check failed Mar 18 08:57:29.280846 master-0 kubenswrapper[6976]: I0318 08:57:29.280784 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:30.281866 master-0 kubenswrapper[6976]: I0318 08:57:30.281801 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:30.281866 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:30.281866 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:30.281866 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:30.283039 master-0 kubenswrapper[6976]: I0318 08:57:30.282993 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:57:31.281402 master-0 kubenswrapper[6976]: I0318 08:57:31.281319 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:57:31.281402 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:57:31.281402 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:57:31.281402 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:57:31.283621 master-0 kubenswrapper[6976]: I0318 08:57:31.281436 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" 
podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:32.280931 master-0 kubenswrapper[6976]: I0318 08:57:32.280851 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:32.280931 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:32.280931 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:32.280931 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:32.280931 master-0 kubenswrapper[6976]: I0318 08:57:32.280916 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:33.281310 master-0 kubenswrapper[6976]: I0318 08:57:33.281221 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:33.281310 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:33.281310 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:33.281310 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:33.282350 master-0 kubenswrapper[6976]: I0318 08:57:33.281322 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:34.279847 master-0 kubenswrapper[6976]: I0318 08:57:34.279768 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:34.279847 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:34.279847 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:34.279847 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:34.280391 master-0 kubenswrapper[6976]: I0318 08:57:34.279876 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:35.281108 master-0 kubenswrapper[6976]: I0318 08:57:35.281014 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:35.281108 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:35.281108 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:35.281108 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:35.282040 master-0 kubenswrapper[6976]: I0318 08:57:35.281107 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:36.280344 master-0 kubenswrapper[6976]: I0318 08:57:36.280259 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:36.280344 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:36.280344 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:36.280344 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:36.280711 master-0 kubenswrapper[6976]: I0318 08:57:36.280369 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:36.786875 master-0 kubenswrapper[6976]: I0318 08:57:36.786821 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 08:57:36.787797 master-0 kubenswrapper[6976]: I0318 08:57:36.787770 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:36.790353 master-0 kubenswrapper[6976]: I0318 08:57:36.790317 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-x7h7h"
Mar 18 08:57:36.790971 master-0 kubenswrapper[6976]: I0318 08:57:36.790932 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Mar 18 08:57:36.805420 master-0 kubenswrapper[6976]: I0318 08:57:36.805352 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 08:57:36.901605 master-0 kubenswrapper[6976]: I0318 08:57:36.901494 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:36.901796 master-0 kubenswrapper[6976]: I0318 08:57:36.901630 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:36.901796 master-0 kubenswrapper[6976]: I0318 08:57:36.901701 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.003756 master-0 kubenswrapper[6976]: I0318 08:57:37.003678 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.003960 master-0 kubenswrapper[6976]: I0318 08:57:37.003790 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.003960 master-0 kubenswrapper[6976]: I0318 08:57:37.003909 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.003960 master-0 kubenswrapper[6976]: I0318 08:57:37.003923 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.004227 master-0 kubenswrapper[6976]: I0318 08:57:37.004165 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.025874 master-0 kubenswrapper[6976]: I0318 08:57:37.025843 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access\") pod \"installer-2-master-0\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.126336 master-0 kubenswrapper[6976]: I0318 08:57:37.126138 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 18 08:57:37.283768 master-0 kubenswrapper[6976]: I0318 08:57:37.283725 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:37.283768 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:37.283768 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:37.283768 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:37.284054 master-0 kubenswrapper[6976]: I0318 08:57:37.283779 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:37.579807 master-0 kubenswrapper[6976]: I0318 08:57:37.579708 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"]
Mar 18 08:57:37.751017 master-0 kubenswrapper[6976]: I0318 08:57:37.750958 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"08e4bcfe-d6ca-4799-9431-682673fe7380","Type":"ContainerStarted","Data":"c2bd81df931b251c8d36514f9c347cc536878690477cb5bf137fec13c0335990"}
Mar 18 08:57:38.279689 master-0 kubenswrapper[6976]: I0318 08:57:38.279601 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:38.279689 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:38.279689 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:38.279689 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:38.280369 master-0 kubenswrapper[6976]: I0318 08:57:38.279721 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:38.760430 master-0 kubenswrapper[6976]: I0318 08:57:38.759898 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"08e4bcfe-d6ca-4799-9431-682673fe7380","Type":"ContainerStarted","Data":"4fac56b4f00969e62c3497577a0e34f987859f3caade7772d5b6be1eaf234a7d"}
Mar 18 08:57:38.787714 master-0 kubenswrapper[6976]: I0318 08:57:38.787625 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.787552378 podStartE2EDuration="2.787552378s" podCreationTimestamp="2026-03-18 08:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:57:38.776052865 +0000 UTC m=+558.361654480" watchObservedRunningTime="2026-03-18 08:57:38.787552378 +0000 UTC m=+558.373153983"
Mar 18 08:57:39.281630 master-0 kubenswrapper[6976]: I0318 08:57:39.281553 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:39.281630 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:39.281630 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:39.281630 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:39.282227 master-0 kubenswrapper[6976]: I0318 08:57:39.281658 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:40.281728 master-0 kubenswrapper[6976]: I0318 08:57:40.281648 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:40.281728 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:40.281728 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:40.281728 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:40.282811 master-0 kubenswrapper[6976]: I0318 08:57:40.281743 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:41.280837 master-0 kubenswrapper[6976]: I0318 08:57:41.280740 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:41.280837 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:41.280837 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:41.280837 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:41.281478 master-0 kubenswrapper[6976]: I0318 08:57:41.280843 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:42.280707 master-0 kubenswrapper[6976]: I0318 08:57:42.280646 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:42.280707 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:42.280707 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:42.280707 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:42.281482 master-0 kubenswrapper[6976]: I0318 08:57:42.280705 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:43.279780 master-0 kubenswrapper[6976]: I0318 08:57:43.279689 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:43.279780 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:43.279780 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:43.279780 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:43.280347 master-0 kubenswrapper[6976]: I0318 08:57:43.279785 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:44.281292 master-0 kubenswrapper[6976]: I0318 08:57:44.281195 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:44.281292 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:44.281292 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:44.281292 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:44.282340 master-0 kubenswrapper[6976]: I0318 08:57:44.281332 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:45.280680 master-0 kubenswrapper[6976]: I0318 08:57:45.280613 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:45.280680 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:45.280680 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:45.280680 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:45.280991 master-0 kubenswrapper[6976]: I0318 08:57:45.280715 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:46.280958 master-0 kubenswrapper[6976]: I0318 08:57:46.280858 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:46.280958 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:46.280958 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:46.280958 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:46.282033 master-0 kubenswrapper[6976]: I0318 08:57:46.280965 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:47.281330 master-0 kubenswrapper[6976]: I0318 08:57:47.281251 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:47.281330 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:47.281330 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:47.281330 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:47.282660 master-0 kubenswrapper[6976]: I0318 08:57:47.281353 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:48.280645 master-0 kubenswrapper[6976]: I0318 08:57:48.280524 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:48.280645 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:48.280645 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:48.280645 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:48.281080 master-0 kubenswrapper[6976]: I0318 08:57:48.280649 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:49.281072 master-0 kubenswrapper[6976]: I0318 08:57:49.280948 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:49.281072 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:49.281072 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:49.281072 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:49.282290 master-0 kubenswrapper[6976]: I0318 08:57:49.281102 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:50.280944 master-0 kubenswrapper[6976]: I0318 08:57:50.280847 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:50.280944 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:50.280944 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:50.280944 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:50.281957 master-0 kubenswrapper[6976]: I0318 08:57:50.280963 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:51.281286 master-0 kubenswrapper[6976]: I0318 08:57:51.281228 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:51.281286 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:51.281286 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:51.281286 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:51.282534 master-0 kubenswrapper[6976]: I0318 08:57:51.282490 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:52.281317 master-0 kubenswrapper[6976]: I0318 08:57:52.281232 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:52.281317 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:52.281317 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:52.281317 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:52.282279 master-0 kubenswrapper[6976]: I0318 08:57:52.281320 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:53.279832 master-0 kubenswrapper[6976]: I0318 08:57:53.279756 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:53.279832 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:53.279832 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:53.279832 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:53.280470 master-0 kubenswrapper[6976]: I0318 08:57:53.279853 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:54.280202 master-0 kubenswrapper[6976]: I0318 08:57:54.280047 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:54.280202 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:54.280202 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:54.280202 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:54.280202 master-0 kubenswrapper[6976]: I0318 08:57:54.280154 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:55.281557 master-0 kubenswrapper[6976]: I0318 08:57:55.281471 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:55.281557 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:55.281557 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:55.281557 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:55.282616 master-0 kubenswrapper[6976]: I0318 08:57:55.281605 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:56.280623 master-0 kubenswrapper[6976]: I0318 08:57:56.280514 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:56.280623 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:56.280623 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:56.280623 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:56.280623 master-0 kubenswrapper[6976]: I0318 08:57:56.280610 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:57.280737 master-0 kubenswrapper[6976]: I0318 08:57:57.280674 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:57.280737 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:57.280737 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:57.280737 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:57.280737 master-0 kubenswrapper[6976]: I0318 08:57:57.280737 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:58.280661 master-0 kubenswrapper[6976]: I0318 08:57:58.280503 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:58.280661 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:58.280661 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:58.280661 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:58.281777 master-0 kubenswrapper[6976]: I0318 08:57:58.280667 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:57:59.279974 master-0 kubenswrapper[6976]: I0318 08:57:59.279871 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:57:59.279974 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:57:59.279974 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:57:59.279974 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:57:59.280789 master-0 kubenswrapper[6976]: I0318 08:57:59.279990 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:00.280223 master-0 kubenswrapper[6976]: I0318 08:58:00.280137 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 08:58:00.280223 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 08:58:00.280223 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 08:58:00.280223 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 08:58:00.281666 master-0 kubenswrapper[6976]: I0318 08:58:00.281259 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 08:58:00.577629 master-0 kubenswrapper[6976]: I0318 08:58:00.577461 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"]
Mar 18 08:58:00.577835 master-0 kubenswrapper[6976]: I0318 08:58:00.577742 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" containerID="cri-o://72ab5355df063971a8723ac73ffe167a74111ca83ef1f5957c8201e93af2ece6" gracePeriod=30
Mar 18 08:58:00.597241 master-0 kubenswrapper[6976]: I0318 08:58:00.597187 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"]
Mar 18 08:58:00.601203 master-0 kubenswrapper[6976]: I0318 08:58:00.601150 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" podUID="d7479b08-17be-4127-893b-c13007c8e4b7" containerName="route-controller-manager" containerID="cri-o://e7d529e7b664f8bc925f1171003f5b0bb292cf1e058d32784adb704c8243994d" gracePeriod=30
Mar 18 08:58:00.940759 master-0 kubenswrapper[6976]: I0318 08:58:00.940696 6976 generic.go:334] "Generic (PLEG): container finished" podID="d7479b08-17be-4127-893b-c13007c8e4b7" containerID="e7d529e7b664f8bc925f1171003f5b0bb292cf1e058d32784adb704c8243994d" exitCode=0
Mar 18 08:58:00.940956 master-0 kubenswrapper[6976]: I0318 08:58:00.940776 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" event={"ID":"d7479b08-17be-4127-893b-c13007c8e4b7","Type":"ContainerDied","Data":"e7d529e7b664f8bc925f1171003f5b0bb292cf1e058d32784adb704c8243994d"}
Mar 18 08:58:00.943472 master-0 kubenswrapper[6976]: I0318 08:58:00.943428 6976 generic.go:334] "Generic (PLEG): container finished" podID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerID="72ab5355df063971a8723ac73ffe167a74111ca83ef1f5957c8201e93af2ece6" exitCode=0
Mar 18 08:58:00.943560 master-0 kubenswrapper[6976]: I0318 08:58:00.943481 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerDied","Data":"72ab5355df063971a8723ac73ffe167a74111ca83ef1f5957c8201e93af2ece6"}
Mar 18 08:58:00.943560 master-0 kubenswrapper[6976]: I0318 08:58:00.943538 6976 scope.go:117] "RemoveContainer" containerID="f7406136c7d1b5446d31fb2d477916274551fd8657f89454d9fad0aeccedb87c"
Mar 18 08:58:01.056919 master-0 kubenswrapper[6976]: I0318 08:58:01.056887 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx"
Mar 18 08:58:01.062128 master-0 kubenswrapper[6976]: I0318 08:58:01.062091 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"
Mar 18 08:58:01.175360 master-0 kubenswrapper[6976]: I0318 08:58:01.175205 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca\") pod \"59c421f2-2154-47eb-bf86-e5fe1b980d76\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") "
Mar 18 08:58:01.175360 master-0 kubenswrapper[6976]: I0318 08:58:01.175299 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles\") pod \"59c421f2-2154-47eb-bf86-e5fe1b980d76\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") "
Mar 18 08:58:01.175360 master-0 kubenswrapper[6976]: I0318 08:58:01.175338 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvqwn\" (UniqueName: \"kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn\") pod \"59c421f2-2154-47eb-bf86-e5fe1b980d76\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175392 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert\") pod \"59c421f2-2154-47eb-bf86-e5fe1b980d76\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175507 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config\") pod \"d7479b08-17be-4127-893b-c13007c8e4b7\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175546 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rht5n\" (UniqueName: \"kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n\") pod \"d7479b08-17be-4127-893b-c13007c8e4b7\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175648 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert\") pod \"d7479b08-17be-4127-893b-c13007c8e4b7\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175695 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config\") pod \"59c421f2-2154-47eb-bf86-e5fe1b980d76\" (UID: \"59c421f2-2154-47eb-bf86-e5fe1b980d76\") "
Mar 18 08:58:01.175943 master-0 kubenswrapper[6976]: I0318 08:58:01.175737 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca\") pod \"d7479b08-17be-4127-893b-c13007c8e4b7\" (UID: \"d7479b08-17be-4127-893b-c13007c8e4b7\") "
Mar 18 08:58:01.176470 master-0 kubenswrapper[6976]: I0318 08:58:01.175953 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59c421f2-2154-47eb-bf86-e5fe1b980d76" (UID: "59c421f2-2154-47eb-bf86-e5fe1b980d76"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:58:01.176470 master-0 kubenswrapper[6976]: I0318 08:58:01.176334 6976 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 08:58:01.176630 master-0 kubenswrapper[6976]: I0318 08:58:01.176523 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config" (OuterVolumeSpecName: "config") pod "d7479b08-17be-4127-893b-c13007c8e4b7" (UID: "d7479b08-17be-4127-893b-c13007c8e4b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:58:01.176982 master-0 kubenswrapper[6976]: I0318 08:58:01.176931 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7479b08-17be-4127-893b-c13007c8e4b7" (UID: "d7479b08-17be-4127-893b-c13007c8e4b7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:58:01.177253 master-0 kubenswrapper[6976]: I0318 08:58:01.177085 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config" (OuterVolumeSpecName: "config") pod "59c421f2-2154-47eb-bf86-e5fe1b980d76" (UID: "59c421f2-2154-47eb-bf86-e5fe1b980d76"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 08:58:01.177733 master-0 kubenswrapper[6976]: I0318 08:58:01.177690 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca" (OuterVolumeSpecName: "client-ca") pod "59c421f2-2154-47eb-bf86-e5fe1b980d76" (UID: "59c421f2-2154-47eb-bf86-e5fe1b980d76").
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 08:58:01.178967 master-0 kubenswrapper[6976]: I0318 08:58:01.178906 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn" (OuterVolumeSpecName: "kube-api-access-kvqwn") pod "59c421f2-2154-47eb-bf86-e5fe1b980d76" (UID: "59c421f2-2154-47eb-bf86-e5fe1b980d76"). InnerVolumeSpecName "kube-api-access-kvqwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:01.179087 master-0 kubenswrapper[6976]: I0318 08:58:01.179033 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7479b08-17be-4127-893b-c13007c8e4b7" (UID: "d7479b08-17be-4127-893b-c13007c8e4b7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:58:01.180752 master-0 kubenswrapper[6976]: I0318 08:58:01.180700 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59c421f2-2154-47eb-bf86-e5fe1b980d76" (UID: "59c421f2-2154-47eb-bf86-e5fe1b980d76"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 08:58:01.181023 master-0 kubenswrapper[6976]: I0318 08:58:01.180977 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n" (OuterVolumeSpecName: "kube-api-access-rht5n") pod "d7479b08-17be-4127-893b-c13007c8e4b7" (UID: "d7479b08-17be-4127-893b-c13007c8e4b7"). InnerVolumeSpecName "kube-api-access-rht5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:01.277723 master-0 kubenswrapper[6976]: I0318 08:58:01.277630 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rht5n\" (UniqueName: \"kubernetes.io/projected/d7479b08-17be-4127-893b-c13007c8e4b7-kube-api-access-rht5n\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.277723 master-0 kubenswrapper[6976]: I0318 08:58:01.277697 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7479b08-17be-4127-893b-c13007c8e4b7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.277723 master-0 kubenswrapper[6976]: I0318 08:58:01.277719 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.277723 master-0 kubenswrapper[6976]: I0318 08:58:01.277737 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.278290 master-0 kubenswrapper[6976]: I0318 08:58:01.277755 6976 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59c421f2-2154-47eb-bf86-e5fe1b980d76-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.278290 master-0 kubenswrapper[6976]: I0318 08:58:01.277773 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvqwn\" (UniqueName: \"kubernetes.io/projected/59c421f2-2154-47eb-bf86-e5fe1b980d76-kube-api-access-kvqwn\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.278290 master-0 kubenswrapper[6976]: I0318 08:58:01.277791 6976 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59c421f2-2154-47eb-bf86-e5fe1b980d76-serving-cert\") on node 
\"master-0\" DevicePath \"\"" Mar 18 08:58:01.278290 master-0 kubenswrapper[6976]: I0318 08:58:01.277809 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7479b08-17be-4127-893b-c13007c8e4b7-config\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:01.282285 master-0 kubenswrapper[6976]: I0318 08:58:01.282199 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:01.282285 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:01.282285 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:01.282285 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:01.283187 master-0 kubenswrapper[6976]: I0318 08:58:01.282312 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:01.838071 master-0 kubenswrapper[6976]: I0318 08:58:01.837994 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 08:58:01.838527 master-0 kubenswrapper[6976]: E0318 08:58:01.838500 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7479b08-17be-4127-893b-c13007c8e4b7" containerName="route-controller-manager" Mar 18 08:58:01.838591 master-0 kubenswrapper[6976]: I0318 08:58:01.838536 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7479b08-17be-4127-893b-c13007c8e4b7" containerName="route-controller-manager" Mar 18 08:58:01.838648 master-0 kubenswrapper[6976]: E0318 08:58:01.838625 6976 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.838693 master-0 kubenswrapper[6976]: I0318 08:58:01.838656 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.838736 master-0 kubenswrapper[6976]: E0318 08:58:01.838693 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.838736 master-0 kubenswrapper[6976]: I0318 08:58:01.838728 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.838962 master-0 kubenswrapper[6976]: I0318 08:58:01.838928 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7479b08-17be-4127-893b-c13007c8e4b7" containerName="route-controller-manager" Mar 18 08:58:01.839011 master-0 kubenswrapper[6976]: I0318 08:58:01.838973 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.839011 master-0 kubenswrapper[6976]: I0318 08:58:01.838996 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" containerName="controller-manager" Mar 18 08:58:01.839734 master-0 kubenswrapper[6976]: I0318 08:58:01.839694 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:01.843833 master-0 kubenswrapper[6976]: I0318 08:58:01.843772 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 08:58:01.844974 master-0 kubenswrapper[6976]: I0318 08:58:01.844933 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:01.857073 master-0 kubenswrapper[6976]: I0318 08:58:01.856965 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 08:58:01.870589 master-0 kubenswrapper[6976]: I0318 08:58:01.870497 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 08:58:01.953487 master-0 kubenswrapper[6976]: I0318 08:58:01.953355 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" Mar 18 08:58:01.953487 master-0 kubenswrapper[6976]: I0318 08:58:01.953394 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg" event={"ID":"d7479b08-17be-4127-893b-c13007c8e4b7","Type":"ContainerDied","Data":"1e051b7faa69e903ae0f651dcaa043ed1f5ae5f07bccc322860c3fdfaf058d32"} Mar 18 08:58:01.953487 master-0 kubenswrapper[6976]: I0318 08:58:01.953479 6976 scope.go:117] "RemoveContainer" containerID="e7d529e7b664f8bc925f1171003f5b0bb292cf1e058d32784adb704c8243994d" Mar 18 08:58:01.956257 master-0 kubenswrapper[6976]: I0318 08:58:01.956204 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" event={"ID":"59c421f2-2154-47eb-bf86-e5fe1b980d76","Type":"ContainerDied","Data":"81ead4c8f220d1963f29e356d7dcbc6fa146175546302c4e747d85a34e03f0cd"} Mar 18 08:58:01.956623 master-0 kubenswrapper[6976]: I0318 08:58:01.956539 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c945f8f5b-967lx" Mar 18 08:58:01.975850 master-0 kubenswrapper[6976]: I0318 08:58:01.975792 6976 scope.go:117] "RemoveContainer" containerID="72ab5355df063971a8723ac73ffe167a74111ca83ef1f5957c8201e93af2ece6" Mar 18 08:58:01.988374 master-0 kubenswrapper[6976]: I0318 08:58:01.988324 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:01.988374 master-0 kubenswrapper[6976]: I0318 08:58:01.988378 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:01.988648 master-0 kubenswrapper[6976]: I0318 08:58:01.988402 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:01.988648 master-0 kubenswrapper[6976]: I0318 08:58:01.988465 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod 
\"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:01.988648 master-0 kubenswrapper[6976]: I0318 08:58:01.988491 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:01.988648 master-0 kubenswrapper[6976]: I0318 08:58:01.988514 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:01.988911 master-0 kubenswrapper[6976]: I0318 08:58:01.988642 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:01.988911 master-0 kubenswrapper[6976]: I0318 08:58:01.988759 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" 
Mar 18 08:58:01.988911 master-0 kubenswrapper[6976]: I0318 08:58:01.988867 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.007486 master-0 kubenswrapper[6976]: I0318 08:58:02.007432 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"] Mar 18 08:58:02.017767 master-0 kubenswrapper[6976]: I0318 08:58:02.017683 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d945cb54-px8bg"] Mar 18 08:58:02.037588 master-0 kubenswrapper[6976]: I0318 08:58:02.037523 6976 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"] Mar 18 08:58:02.056582 master-0 kubenswrapper[6976]: I0318 08:58:02.056507 6976 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c945f8f5b-967lx"] Mar 18 08:58:02.090605 master-0 kubenswrapper[6976]: I0318 08:58:02.090539 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.090800 master-0 kubenswrapper[6976]: I0318 08:58:02.090619 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.090800 master-0 kubenswrapper[6976]: I0318 08:58:02.090658 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.090897 master-0 kubenswrapper[6976]: I0318 08:58:02.090845 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.090897 master-0 kubenswrapper[6976]: I0318 08:58:02.090885 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.090986 master-0 kubenswrapper[6976]: I0318 08:58:02.090929 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.090986 master-0 
kubenswrapper[6976]: I0318 08:58:02.090969 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.091067 master-0 kubenswrapper[6976]: I0318 08:58:02.090992 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.091067 master-0 kubenswrapper[6976]: I0318 08:58:02.091008 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.092883 master-0 kubenswrapper[6976]: I0318 08:58:02.092800 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.092883 master-0 kubenswrapper[6976]: I0318 08:58:02.092844 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: 
\"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.093410 master-0 kubenswrapper[6976]: I0318 08:58:02.093162 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.093410 master-0 kubenswrapper[6976]: I0318 08:58:02.093266 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.099024 master-0 kubenswrapper[6976]: I0318 08:58:02.098984 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.099024 master-0 kubenswrapper[6976]: I0318 08:58:02.099015 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.099946 master-0 kubenswrapper[6976]: I0318 08:58:02.099916 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.107779 master-0 kubenswrapper[6976]: I0318 08:58:02.107740 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.110106 master-0 kubenswrapper[6976]: I0318 08:58:02.110068 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.185073 master-0 kubenswrapper[6976]: I0318 08:58:02.185023 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.207530 master-0 kubenswrapper[6976]: I0318 08:58:02.207477 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.280605 master-0 kubenswrapper[6976]: I0318 08:58:02.280541 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:02.280605 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:02.280605 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:02.280605 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:02.280864 master-0 kubenswrapper[6976]: I0318 08:58:02.280626 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:02.609812 master-0 kubenswrapper[6976]: I0318 08:58:02.609777 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c421f2-2154-47eb-bf86-e5fe1b980d76" path="/var/lib/kubelet/pods/59c421f2-2154-47eb-bf86-e5fe1b980d76/volumes" Mar 18 08:58:02.610684 master-0 kubenswrapper[6976]: I0318 08:58:02.610633 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7479b08-17be-4127-893b-c13007c8e4b7" path="/var/lib/kubelet/pods/d7479b08-17be-4127-893b-c13007c8e4b7/volumes" Mar 18 08:58:02.611131 master-0 kubenswrapper[6976]: I0318 08:58:02.611103 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 08:58:02.614354 master-0 kubenswrapper[6976]: W0318 08:58:02.614294 6976 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b7ac7ef_060f_45d2_8988_006d45402e00.slice/crio-78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30 WatchSource:0}: Error finding container 78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30: Status 404 returned error can't find the container with id 78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30 Mar 18 08:58:02.669742 master-0 kubenswrapper[6976]: I0318 08:58:02.669685 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 08:58:02.966087 master-0 kubenswrapper[6976]: I0318 08:58:02.965673 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerStarted","Data":"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74"} Mar 18 08:58:02.966087 master-0 kubenswrapper[6976]: I0318 08:58:02.965728 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerStarted","Data":"34190ff24c5d64d3f04ee73c9371b2fe699e4dc756931f93643f7e454d205294"} Mar 18 08:58:02.966087 master-0 kubenswrapper[6976]: I0318 08:58:02.966035 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:02.968582 master-0 kubenswrapper[6976]: I0318 08:58:02.968519 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" event={"ID":"7b7ac7ef-060f-45d2-8988-006d45402e00","Type":"ContainerStarted","Data":"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603"} Mar 18 08:58:02.968582 master-0 kubenswrapper[6976]: I0318 08:58:02.968549 6976 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" event={"ID":"7b7ac7ef-060f-45d2-8988-006d45402e00","Type":"ContainerStarted","Data":"78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30"} Mar 18 08:58:02.969058 master-0 kubenswrapper[6976]: I0318 08:58:02.969025 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:02.970893 master-0 kubenswrapper[6976]: I0318 08:58:02.970864 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 08:58:03.003760 master-0 kubenswrapper[6976]: I0318 08:58:03.003662 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" podStartSLOduration=3.003645495 podStartE2EDuration="3.003645495s" podCreationTimestamp="2026-03-18 08:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:58:03.001468189 +0000 UTC m=+582.587069794" watchObservedRunningTime="2026-03-18 08:58:03.003645495 +0000 UTC m=+582.589247100" Mar 18 08:58:03.275152 master-0 kubenswrapper[6976]: I0318 08:58:03.275024 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 08:58:03.279351 master-0 kubenswrapper[6976]: I0318 08:58:03.279302 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:03.279351 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:03.279351 master-0 
kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:03.279351 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:03.279351 master-0 kubenswrapper[6976]: I0318 08:58:03.279351 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:03.300454 master-0 kubenswrapper[6976]: I0318 08:58:03.300374 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" podStartSLOduration=3.300354041 podStartE2EDuration="3.300354041s" podCreationTimestamp="2026-03-18 08:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:58:03.113531107 +0000 UTC m=+582.699132702" watchObservedRunningTime="2026-03-18 08:58:03.300354041 +0000 UTC m=+582.885955636" Mar 18 08:58:04.280333 master-0 kubenswrapper[6976]: I0318 08:58:04.280267 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:04.280333 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:04.280333 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:04.280333 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:04.281023 master-0 kubenswrapper[6976]: I0318 08:58:04.280351 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:05.023111 master-0 
kubenswrapper[6976]: I0318 08:58:05.023016 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 08:58:05.024244 master-0 kubenswrapper[6976]: I0318 08:58:05.024170 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.026316 master-0 kubenswrapper[6976]: I0318 08:58:05.026239 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6mb4h" Mar 18 08:58:05.026413 master-0 kubenswrapper[6976]: I0318 08:58:05.026363 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 08:58:05.042100 master-0 kubenswrapper[6976]: I0318 08:58:05.042054 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 08:58:05.142507 master-0 kubenswrapper[6976]: I0318 08:58:05.142415 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.142507 master-0 kubenswrapper[6976]: I0318 08:58:05.142466 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.142952 master-0 kubenswrapper[6976]: I0318 08:58:05.142684 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.244114 master-0 kubenswrapper[6976]: I0318 08:58:05.244021 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.244114 master-0 kubenswrapper[6976]: I0318 08:58:05.244081 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.244114 master-0 kubenswrapper[6976]: I0318 08:58:05.244127 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.244535 master-0 kubenswrapper[6976]: I0318 08:58:05.244232 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.244535 master-0 kubenswrapper[6976]: I0318 08:58:05.244279 6976 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.272501 master-0 kubenswrapper[6976]: I0318 08:58:05.272417 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.280692 master-0 kubenswrapper[6976]: I0318 08:58:05.280507 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:05.280692 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:05.280692 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:05.280692 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:05.281690 master-0 kubenswrapper[6976]: I0318 08:58:05.280644 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:05.353554 master-0 kubenswrapper[6976]: I0318 08:58:05.353471 6976 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:05.489143 master-0 kubenswrapper[6976]: I0318 08:58:05.489067 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 08:58:05.490434 master-0 kubenswrapper[6976]: I0318 08:58:05.490389 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.493711 master-0 kubenswrapper[6976]: I0318 08:58:05.493643 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-jw7t8" Mar 18 08:58:05.497993 master-0 kubenswrapper[6976]: I0318 08:58:05.494777 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 08:58:05.497993 master-0 kubenswrapper[6976]: I0318 08:58:05.496880 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 08:58:05.549255 master-0 kubenswrapper[6976]: I0318 08:58:05.549194 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.549463 master-0 kubenswrapper[6976]: I0318 08:58:05.549312 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.549463 master-0 kubenswrapper[6976]: I0318 08:58:05.549357 6976 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.651034 master-0 kubenswrapper[6976]: I0318 08:58:05.650865 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.651034 master-0 kubenswrapper[6976]: I0318 08:58:05.650924 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.651358 master-0 kubenswrapper[6976]: I0318 08:58:05.651128 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.651554 master-0 kubenswrapper[6976]: I0318 08:58:05.651495 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.651691 master-0 kubenswrapper[6976]: I0318 08:58:05.651646 6976 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.668437 master-0 kubenswrapper[6976]: I0318 08:58:05.668385 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access\") pod \"installer-4-master-0\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.817760 master-0 kubenswrapper[6976]: I0318 08:58:05.816798 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 18 08:58:05.829787 master-0 kubenswrapper[6976]: I0318 08:58:05.828342 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:05.990505 master-0 kubenswrapper[6976]: I0318 08:58:05.990459 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"93298cb2-d669-49ea-92be-8891f07ab1c5","Type":"ContainerStarted","Data":"cbf3348e82bffe8480be217acc63e599c4842d6df59ff32a187560845a00e908"} Mar 18 08:58:06.218821 master-0 kubenswrapper[6976]: I0318 08:58:06.218651 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 18 08:58:06.229691 master-0 kubenswrapper[6976]: W0318 08:58:06.229647 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5ca7b84e_0aff_4526_948a_03492712ff8f.slice/crio-b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093 WatchSource:0}: Error finding container b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093: Status 404 returned error can't find the 
container with id b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093 Mar 18 08:58:06.281251 master-0 kubenswrapper[6976]: I0318 08:58:06.280533 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:06.281251 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:06.281251 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:06.281251 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:06.281251 master-0 kubenswrapper[6976]: I0318 08:58:06.280599 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:07.001013 master-0 kubenswrapper[6976]: I0318 08:58:07.000915 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5ca7b84e-0aff-4526-948a-03492712ff8f","Type":"ContainerStarted","Data":"20d4a123aac7008bd6bae1aff8407f2615166875d8bf7999da7a207bfc33acbf"} Mar 18 08:58:07.001013 master-0 kubenswrapper[6976]: I0318 08:58:07.001012 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5ca7b84e-0aff-4526-948a-03492712ff8f","Type":"ContainerStarted","Data":"b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093"} Mar 18 08:58:07.003444 master-0 kubenswrapper[6976]: I0318 08:58:07.003393 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"93298cb2-d669-49ea-92be-8891f07ab1c5","Type":"ContainerStarted","Data":"c0f26fec4f81ffb39062787c37d928b9983f9d92c91a3bd728d23e41e8ceecc3"} Mar 18 
08:58:07.043040 master-0 kubenswrapper[6976]: I0318 08:58:07.042927 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.042903016 podStartE2EDuration="2.042903016s" podCreationTimestamp="2026-03-18 08:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:58:07.036987326 +0000 UTC m=+586.622588961" watchObservedRunningTime="2026-03-18 08:58:07.042903016 +0000 UTC m=+586.628504621" Mar 18 08:58:07.068147 master-0 kubenswrapper[6976]: I0318 08:58:07.068018 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=2.067984446 podStartE2EDuration="2.067984446s" podCreationTimestamp="2026-03-18 08:58:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 08:58:07.06186662 +0000 UTC m=+586.647468225" watchObservedRunningTime="2026-03-18 08:58:07.067984446 +0000 UTC m=+586.653586081" Mar 18 08:58:07.279039 master-0 kubenswrapper[6976]: I0318 08:58:07.278890 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:07.279039 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:07.279039 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:07.279039 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:07.279500 master-0 kubenswrapper[6976]: I0318 08:58:07.279459 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:08.280177 master-0 kubenswrapper[6976]: I0318 08:58:08.280090 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:08.280177 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:08.280177 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:08.280177 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:08.280177 master-0 kubenswrapper[6976]: I0318 08:58:08.280172 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:08.921271 master-0 kubenswrapper[6976]: I0318 08:58:08.921168 6976 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 08:58:08.921977 master-0 kubenswrapper[6976]: I0318 08:58:08.921920 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://e170620a09f67f7dd5644ef0ed06bf71397ac82649b983c533838793eeba5434" gracePeriod=30 Mar 18 08:58:08.921977 master-0 kubenswrapper[6976]: I0318 08:58:08.921939 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://1951546c85592fe98e5dbb82d2390a079377b906f6ce17c831e35dd6a20e3c5a" gracePeriod=30 Mar 18 08:58:08.922116 master-0 kubenswrapper[6976]: I0318 08:58:08.922031 6976 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://9bb40497785c5f7d8d5301fe57c4b67d01320ad9570331c3ae357b52e29702f0" gracePeriod=30 Mar 18 08:58:08.922162 master-0 kubenswrapper[6976]: I0318 08:58:08.922094 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://2db73d7101a43abc812f123a338de4314d42908c424cba5f3dfda66103668e89" gracePeriod=30 Mar 18 08:58:08.922218 master-0 kubenswrapper[6976]: I0318 08:58:08.922125 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://dd65c9ff55caaa591c9ce309cbf2e71c0d904c09319b714ab36cd668cef65506" gracePeriod=30 Mar 18 08:58:08.923090 master-0 kubenswrapper[6976]: I0318 08:58:08.922704 6976 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 18 08:58:08.923090 master-0 kubenswrapper[6976]: E0318 08:58:08.923030 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 08:58:08.923090 master-0 kubenswrapper[6976]: I0318 08:58:08.923051 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 18 08:58:08.923090 master-0 kubenswrapper[6976]: E0318 08:58:08.923078 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 08:58:08.923090 master-0 kubenswrapper[6976]: I0318 08:58:08.923091 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923113 
6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923125 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923146 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923160 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923177 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923188 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923205 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923217 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923235 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923247 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: E0318 08:58:08.923273 
6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 08:58:08.923307 master-0 kubenswrapper[6976]: I0318 08:58:08.923285 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 18 08:58:08.923856 master-0 kubenswrapper[6976]: I0318 08:58:08.923485 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 18 08:58:08.923856 master-0 kubenswrapper[6976]: I0318 08:58:08.923509 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 18 08:58:08.923856 master-0 kubenswrapper[6976]: I0318 08:58:08.923536 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 18 08:58:08.923856 master-0 kubenswrapper[6976]: I0318 08:58:08.923557 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 18 08:58:08.923856 master-0 kubenswrapper[6976]: I0318 08:58:08.923606 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 18 08:58:08.999343 master-0 kubenswrapper[6976]: I0318 08:58:08.999254 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:08.999485 master-0 kubenswrapper[6976]: I0318 08:58:08.999353 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") 
pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:08.999485 master-0 kubenswrapper[6976]: I0318 08:58:08.999410 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:08.999485 master-0 kubenswrapper[6976]: I0318 08:58:08.999477 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:08.999809 master-0 kubenswrapper[6976]: I0318 08:58:08.999554 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:08.999809 master-0 kubenswrapper[6976]: I0318 08:58:08.999717 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.100900 master-0 kubenswrapper[6976]: I0318 08:58:09.100834 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.100900 
master-0 kubenswrapper[6976]: I0318 08:58:09.100900 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101117 master-0 kubenswrapper[6976]: I0318 08:58:09.101031 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101188 master-0 kubenswrapper[6976]: I0318 08:58:09.101144 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101237 master-0 kubenswrapper[6976]: I0318 08:58:09.101196 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101330 master-0 kubenswrapper[6976]: I0318 08:58:09.101156 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101611 master-0 kubenswrapper[6976]: I0318 08:58:09.101537 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101701 master-0 kubenswrapper[6976]: I0318 08:58:09.101663 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101754 master-0 kubenswrapper[6976]: I0318 08:58:09.101733 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101798 master-0 kubenswrapper[6976]: I0318 08:58:09.101778 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101862 master-0 kubenswrapper[6976]: I0318 08:58:09.101831 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.101953 master-0 kubenswrapper[6976]: I0318 08:58:09.101919 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 08:58:09.280625 master-0 
kubenswrapper[6976]: I0318 08:58:09.280524 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:09.280625 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:09.280625 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:09.280625 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:09.281299 master-0 kubenswrapper[6976]: I0318 08:58:09.280664 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:10.029357 master-0 kubenswrapper[6976]: I0318 08:58:10.029269 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:58:10.030836 master-0 kubenswrapper[6976]: I0318 08:58:10.030790 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:58:10.033872 master-0 kubenswrapper[6976]: I0318 08:58:10.033794 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="1951546c85592fe98e5dbb82d2390a079377b906f6ce17c831e35dd6a20e3c5a" exitCode=2 Mar 18 08:58:10.033872 master-0 kubenswrapper[6976]: I0318 08:58:10.033833 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="dd65c9ff55caaa591c9ce309cbf2e71c0d904c09319b714ab36cd668cef65506" exitCode=0 Mar 18 08:58:10.033872 master-0 kubenswrapper[6976]: I0318 08:58:10.033842 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" 
containerID="9bb40497785c5f7d8d5301fe57c4b67d01320ad9570331c3ae357b52e29702f0" exitCode=2 Mar 18 08:58:10.281206 master-0 kubenswrapper[6976]: I0318 08:58:10.281005 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:10.281206 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:10.281206 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:10.281206 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:10.281206 master-0 kubenswrapper[6976]: I0318 08:58:10.281135 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:10.574624 master-0 kubenswrapper[6976]: I0318 08:58:10.574403 6976 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Mar 18 08:58:10.574624 master-0 kubenswrapper[6976]: I0318 08:58:10.574485 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Mar 18 08:58:11.281286 master-0 kubenswrapper[6976]: I0318 08:58:11.281192 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 
08:58:11.281286 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:11.281286 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:11.281286 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:11.282334 master-0 kubenswrapper[6976]: I0318 08:58:11.281287 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:12.280162 master-0 kubenswrapper[6976]: I0318 08:58:12.280046 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:12.280162 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:12.280162 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:12.280162 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:12.280626 master-0 kubenswrapper[6976]: I0318 08:58:12.280165 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:13.280747 master-0 kubenswrapper[6976]: I0318 08:58:13.280619 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:13.280747 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:13.280747 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:13.280747 master-0 kubenswrapper[6976]: healthz 
check failed Mar 18 08:58:13.280747 master-0 kubenswrapper[6976]: I0318 08:58:13.280728 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:14.281068 master-0 kubenswrapper[6976]: I0318 08:58:14.280946 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:14.281068 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:14.281068 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:14.281068 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:14.281068 master-0 kubenswrapper[6976]: I0318 08:58:14.281038 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:15.280987 master-0 kubenswrapper[6976]: I0318 08:58:15.280889 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:15.280987 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:15.280987 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:15.280987 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:15.280987 master-0 kubenswrapper[6976]: I0318 08:58:15.280988 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" 
podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:16.281380 master-0 kubenswrapper[6976]: I0318 08:58:16.281244 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:16.281380 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:16.281380 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:16.281380 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:16.281380 master-0 kubenswrapper[6976]: I0318 08:58:16.281368 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:17.281126 master-0 kubenswrapper[6976]: I0318 08:58:17.281030 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:17.281126 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:17.281126 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:17.281126 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:17.282480 master-0 kubenswrapper[6976]: I0318 08:58:17.281136 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:18.279919 master-0 kubenswrapper[6976]: I0318 08:58:18.279788 6976 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:18.279919 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:18.279919 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:18.279919 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:18.280509 master-0 kubenswrapper[6976]: I0318 08:58:18.279903 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:19.281241 master-0 kubenswrapper[6976]: I0318 08:58:19.281100 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:19.281241 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:19.281241 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:19.281241 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:19.282195 master-0 kubenswrapper[6976]: I0318 08:58:19.281257 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:20.280385 master-0 kubenswrapper[6976]: I0318 08:58:20.280334 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:20.280385 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:20.280385 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:20.280385 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:20.281224 master-0 kubenswrapper[6976]: I0318 08:58:20.281179 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:21.280951 master-0 kubenswrapper[6976]: I0318 08:58:21.280796 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:21.280951 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:21.280951 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:21.280951 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:21.282393 master-0 kubenswrapper[6976]: I0318 08:58:21.280961 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:21.556436 master-0 kubenswrapper[6976]: E0318 08:58:21.556201 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:22.280668 master-0 kubenswrapper[6976]: I0318 08:58:22.280541 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:22.280668 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:22.280668 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:22.280668 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:22.281658 master-0 kubenswrapper[6976]: I0318 08:58:22.280694 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:23.143441 master-0 kubenswrapper[6976]: I0318 08:58:23.143352 6976 generic.go:334] "Generic (PLEG): container finished" podID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerID="4fac56b4f00969e62c3497577a0e34f987859f3caade7772d5b6be1eaf234a7d" exitCode=0 Mar 18 08:58:23.143441 master-0 kubenswrapper[6976]: I0318 08:58:23.143425 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"08e4bcfe-d6ca-4799-9431-682673fe7380","Type":"ContainerDied","Data":"4fac56b4f00969e62c3497577a0e34f987859f3caade7772d5b6be1eaf234a7d"} Mar 18 08:58:23.280064 master-0 kubenswrapper[6976]: I0318 08:58:23.279954 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:23.280064 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:23.280064 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:23.280064 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:23.280492 master-0 kubenswrapper[6976]: I0318 08:58:23.280107 6976 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:24.154706 master-0 kubenswrapper[6976]: I0318 08:58:24.154534 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" exitCode=1 Mar 18 08:58:24.154706 master-0 kubenswrapper[6976]: I0318 08:58:24.154653 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8"} Mar 18 08:58:24.155735 master-0 kubenswrapper[6976]: I0318 08:58:24.154741 6976 scope.go:117] "RemoveContainer" containerID="fc3bba74c1c5dfc4469c628e1ccd99032fb59aaf6362379db3f1337bbf0219a6" Mar 18 08:58:24.156538 master-0 kubenswrapper[6976]: I0318 08:58:24.155852 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:58:24.156538 master-0 kubenswrapper[6976]: E0318 08:58:24.156272 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:58:24.281253 master-0 kubenswrapper[6976]: I0318 08:58:24.281135 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Mar 18 08:58:24.281253 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:24.281253 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:24.281253 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:24.281253 master-0 kubenswrapper[6976]: I0318 08:58:24.281246 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:24.572338 master-0 kubenswrapper[6976]: I0318 08:58:24.572226 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:58:24.747189 master-0 kubenswrapper[6976]: I0318 08:58:24.747104 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir\") pod \"08e4bcfe-d6ca-4799-9431-682673fe7380\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " Mar 18 08:58:24.747189 master-0 kubenswrapper[6976]: I0318 08:58:24.747188 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access\") pod \"08e4bcfe-d6ca-4799-9431-682673fe7380\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " Mar 18 08:58:24.747514 master-0 kubenswrapper[6976]: I0318 08:58:24.747291 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock\") pod \"08e4bcfe-d6ca-4799-9431-682673fe7380\" (UID: \"08e4bcfe-d6ca-4799-9431-682673fe7380\") " Mar 18 08:58:24.747746 master-0 kubenswrapper[6976]: I0318 08:58:24.747708 6976 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "08e4bcfe-d6ca-4799-9431-682673fe7380" (UID: "08e4bcfe-d6ca-4799-9431-682673fe7380"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:24.747965 master-0 kubenswrapper[6976]: I0318 08:58:24.747880 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock" (OuterVolumeSpecName: "var-lock") pod "08e4bcfe-d6ca-4799-9431-682673fe7380" (UID: "08e4bcfe-d6ca-4799-9431-682673fe7380"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:24.751970 master-0 kubenswrapper[6976]: I0318 08:58:24.751851 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "08e4bcfe-d6ca-4799-9431-682673fe7380" (UID: "08e4bcfe-d6ca-4799-9431-682673fe7380"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:24.849612 master-0 kubenswrapper[6976]: I0318 08:58:24.849508 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:24.849612 master-0 kubenswrapper[6976]: I0318 08:58:24.849627 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/08e4bcfe-d6ca-4799-9431-682673fe7380-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:24.850110 master-0 kubenswrapper[6976]: I0318 08:58:24.849664 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/08e4bcfe-d6ca-4799-9431-682673fe7380-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:25.162202 master-0 kubenswrapper[6976]: I0318 08:58:25.162080 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"08e4bcfe-d6ca-4799-9431-682673fe7380","Type":"ContainerDied","Data":"c2bd81df931b251c8d36514f9c347cc536878690477cb5bf137fec13c0335990"} Mar 18 08:58:25.162202 master-0 kubenswrapper[6976]: I0318 08:58:25.162129 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bd81df931b251c8d36514f9c347cc536878690477cb5bf137fec13c0335990" Mar 18 08:58:25.162887 master-0 kubenswrapper[6976]: I0318 08:58:25.162861 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 08:58:25.164293 master-0 kubenswrapper[6976]: I0318 08:58:25.164261 6976 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f" exitCode=1 Mar 18 08:58:25.164378 master-0 kubenswrapper[6976]: I0318 08:58:25.164304 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f"} Mar 18 08:58:25.164378 master-0 kubenswrapper[6976]: I0318 08:58:25.164329 6976 scope.go:117] "RemoveContainer" containerID="0e74fe65579e23426bc0e51944122434e2b88b2a4dcfe52117fc70980e194f0d" Mar 18 08:58:25.165205 master-0 kubenswrapper[6976]: I0318 08:58:25.165155 6976 scope.go:117] "RemoveContainer" containerID="6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f" Mar 18 08:58:25.167363 master-0 kubenswrapper[6976]: E0318 08:58:25.165533 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" Mar 18 08:58:25.280070 master-0 kubenswrapper[6976]: I0318 08:58:25.280032 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:25.280070 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:25.280070 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 
08:58:25.280070 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:25.280402 master-0 kubenswrapper[6976]: I0318 08:58:25.280085 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:26.280755 master-0 kubenswrapper[6976]: I0318 08:58:26.280668 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:26.280755 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:26.280755 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:26.280755 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:26.281856 master-0 kubenswrapper[6976]: I0318 08:58:26.280775 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:27.280907 master-0 kubenswrapper[6976]: I0318 08:58:27.280776 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:27.280907 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:27.280907 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:27.280907 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:27.282280 master-0 kubenswrapper[6976]: I0318 08:58:27.280916 6976 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:28.281331 master-0 kubenswrapper[6976]: I0318 08:58:28.281211 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:58:28.281331 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:58:28.281331 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:58:28.281331 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:58:28.282667 master-0 kubenswrapper[6976]: I0318 08:58:28.281335 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:58:28.282667 master-0 kubenswrapper[6976]: I0318 08:58:28.281431 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 08:58:28.282667 master-0 kubenswrapper[6976]: I0318 08:58:28.282556 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e"} pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" containerMessage="Container router failed startup probe, will be restarted" Mar 18 08:58:28.282968 master-0 kubenswrapper[6976]: I0318 08:58:28.282672 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" 
containerID="cri-o://8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e" gracePeriod=3600 Mar 18 08:58:29.308487 master-0 kubenswrapper[6976]: E0318 08:58:29.308350 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:58:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:58:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:58:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T08:58:19Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:31.556936 master-0 kubenswrapper[6976]: E0318 08:58:31.556815 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:32.425028 master-0 kubenswrapper[6976]: I0318 08:58:32.424932 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:58:32.425935 master-0 
kubenswrapper[6976]: I0318 08:58:32.425880 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:58:32.426415 master-0 kubenswrapper[6976]: E0318 08:58:32.426361 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:58:33.565465 master-0 kubenswrapper[6976]: I0318 08:58:33.565371 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:58:33.566600 master-0 kubenswrapper[6976]: I0318 08:58:33.566115 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:58:33.566600 master-0 kubenswrapper[6976]: E0318 08:58:33.566464 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:58:33.784102 master-0 kubenswrapper[6976]: I0318 08:58:33.784019 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:58:34.248493 master-0 kubenswrapper[6976]: I0318 08:58:34.248418 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:58:34.248940 master-0 kubenswrapper[6976]: E0318 
08:58:34.248864 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:58:35.598926 master-0 kubenswrapper[6976]: I0318 08:58:35.598866 6976 scope.go:117] "RemoveContainer" containerID="6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f" Mar 18 08:58:36.278423 master-0 kubenswrapper[6976]: I0318 08:58:36.278363 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"7c2aae6fa53257e6d8c7e1c783c29a93037db597eccbd9c6d53d330e1c671296"} Mar 18 08:58:39.302501 master-0 kubenswrapper[6976]: I0318 08:58:39.302423 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:58:39.304262 master-0 kubenswrapper[6976]: I0318 08:58:39.304174 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:58:39.305785 master-0 kubenswrapper[6976]: I0318 08:58:39.305728 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 08:58:39.306839 master-0 kubenswrapper[6976]: I0318 08:58:39.306759 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 08:58:39.308630 master-0 kubenswrapper[6976]: I0318 08:58:39.308534 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" 
containerID="2db73d7101a43abc812f123a338de4314d42908c424cba5f3dfda66103668e89" exitCode=137 Mar 18 08:58:39.308955 master-0 kubenswrapper[6976]: E0318 08:58:39.308624 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:39.308955 master-0 kubenswrapper[6976]: I0318 08:58:39.308638 6976 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="e170620a09f67f7dd5644ef0ed06bf71397ac82649b983c533838793eeba5434" exitCode=137 Mar 18 08:58:39.519203 master-0 kubenswrapper[6976]: I0318 08:58:39.519114 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:58:39.520611 master-0 kubenswrapper[6976]: I0318 08:58:39.520509 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:58:39.521862 master-0 kubenswrapper[6976]: I0318 08:58:39.521791 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 08:58:39.522470 master-0 kubenswrapper[6976]: I0318 08:58:39.522407 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 08:58:39.524357 master-0 kubenswrapper[6976]: I0318 08:58:39.524280 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:58:39.684145 master-0 kubenswrapper[6976]: I0318 08:58:39.683999 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684145 master-0 kubenswrapper[6976]: I0318 08:58:39.684107 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684385 master-0 kubenswrapper[6976]: I0318 08:58:39.684129 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.684385 master-0 kubenswrapper[6976]: I0318 08:58:39.684202 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684385 master-0 kubenswrapper[6976]: I0318 08:58:39.684219 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.684385 master-0 kubenswrapper[6976]: I0318 08:58:39.684251 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684385 master-0 kubenswrapper[6976]: I0318 08:58:39.684355 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.684606 master-0 kubenswrapper[6976]: I0318 08:58:39.684369 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684606 master-0 kubenswrapper[6976]: I0318 08:58:39.684420 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.684606 master-0 kubenswrapper[6976]: I0318 08:58:39.684437 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). 
InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.684606 master-0 kubenswrapper[6976]: I0318 08:58:39.684481 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 18 08:58:39.684759 master-0 kubenswrapper[6976]: I0318 08:58:39.684601 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:39.685042 master-0 kubenswrapper[6976]: I0318 08:58:39.685005 6976 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:39.685092 master-0 kubenswrapper[6976]: I0318 08:58:39.685039 6976 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:39.685092 master-0 kubenswrapper[6976]: I0318 08:58:39.685062 6976 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:39.685092 master-0 kubenswrapper[6976]: I0318 08:58:39.685081 6976 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 
08:58:39.685203 master-0 kubenswrapper[6976]: I0318 08:58:39.685100 6976 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:39.685203 master-0 kubenswrapper[6976]: I0318 08:58:39.685120 6976 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:40.319698 master-0 kubenswrapper[6976]: I0318 08:58:40.319636 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 18 08:58:40.322249 master-0 kubenswrapper[6976]: I0318 08:58:40.322195 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 18 08:58:40.323787 master-0 kubenswrapper[6976]: I0318 08:58:40.323748 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 18 08:58:40.324774 master-0 kubenswrapper[6976]: I0318 08:58:40.324729 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 18 08:58:40.327337 master-0 kubenswrapper[6976]: I0318 08:58:40.327279 6976 scope.go:117] "RemoveContainer" containerID="1951546c85592fe98e5dbb82d2390a079377b906f6ce17c831e35dd6a20e3c5a" Mar 18 08:58:40.327538 master-0 kubenswrapper[6976]: I0318 08:58:40.327495 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:58:40.346778 master-0 kubenswrapper[6976]: I0318 08:58:40.346641 6976 scope.go:117] "RemoveContainer" containerID="dd65c9ff55caaa591c9ce309cbf2e71c0d904c09319b714ab36cd668cef65506" Mar 18 08:58:40.381285 master-0 kubenswrapper[6976]: I0318 08:58:40.381189 6976 scope.go:117] "RemoveContainer" containerID="9bb40497785c5f7d8d5301fe57c4b67d01320ad9570331c3ae357b52e29702f0" Mar 18 08:58:40.400678 master-0 kubenswrapper[6976]: I0318 08:58:40.400533 6976 scope.go:117] "RemoveContainer" containerID="2db73d7101a43abc812f123a338de4314d42908c424cba5f3dfda66103668e89" Mar 18 08:58:40.422951 master-0 kubenswrapper[6976]: I0318 08:58:40.422928 6976 scope.go:117] "RemoveContainer" containerID="e170620a09f67f7dd5644ef0ed06bf71397ac82649b983c533838793eeba5434" Mar 18 08:58:40.444042 master-0 kubenswrapper[6976]: I0318 08:58:40.443995 6976 scope.go:117] "RemoveContainer" containerID="512a999778aeba262c615ce98f4b7e30d2e5304b6c496908178b7d3a73d7fb2e" Mar 18 08:58:40.465117 master-0 kubenswrapper[6976]: I0318 08:58:40.465072 6976 scope.go:117] "RemoveContainer" containerID="8e22cea355c21809ea7ad1e7a2be9dfff724fa66b0b6eb753d91edc0a5a5e930" Mar 18 08:58:40.482229 master-0 kubenswrapper[6976]: I0318 08:58:40.482201 6976 scope.go:117] "RemoveContainer" containerID="05713cda00e01f4fa6b33e36c9677b903f2b97a2f623ad2f25f79ec8b0a1264c" Mar 18 08:58:40.617116 master-0 kubenswrapper[6976]: I0318 08:58:40.617053 6976 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 18 08:58:41.557731 master-0 kubenswrapper[6976]: E0318 08:58:41.557609 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 
08:58:42.951935 master-0 kubenswrapper[6976]: E0318 08:58:42.951631 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189de3caeb267f3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:08.921911102 +0000 UTC m=+588.507512717,LastTimestamp:2026-03-18 08:58:08.921911102 +0000 UTC m=+588.507512717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:58:48.598980 master-0 kubenswrapper[6976]: I0318 08:58:48.598928 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:58:48.600373 master-0 kubenswrapper[6976]: E0318 08:58:48.600337 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:58:49.309729 master-0 kubenswrapper[6976]: E0318 08:58:49.309651 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:50.598112 master-0 
kubenswrapper[6976]: I0318 08:58:50.598030 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:58:50.634334 master-0 kubenswrapper[6976]: I0318 08:58:50.634261 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 08:58:50.634334 master-0 kubenswrapper[6976]: I0318 08:58:50.634299 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 08:58:51.424362 master-0 kubenswrapper[6976]: I0318 08:58:51.424288 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_5ca7b84e-0aff-4526-948a-03492712ff8f/installer/0.log" Mar 18 08:58:51.424674 master-0 kubenswrapper[6976]: I0318 08:58:51.424389 6976 generic.go:334] "Generic (PLEG): container finished" podID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerID="20d4a123aac7008bd6bae1aff8407f2615166875d8bf7999da7a207bfc33acbf" exitCode=1 Mar 18 08:58:51.424674 master-0 kubenswrapper[6976]: I0318 08:58:51.424524 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5ca7b84e-0aff-4526-948a-03492712ff8f","Type":"ContainerDied","Data":"20d4a123aac7008bd6bae1aff8407f2615166875d8bf7999da7a207bfc33acbf"} Mar 18 08:58:51.428587 master-0 kubenswrapper[6976]: I0318 08:58:51.428491 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_93298cb2-d669-49ea-92be-8891f07ab1c5/installer/0.log" Mar 18 08:58:51.428760 master-0 kubenswrapper[6976]: I0318 08:58:51.428606 6976 generic.go:334] "Generic (PLEG): container finished" podID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerID="c0f26fec4f81ffb39062787c37d928b9983f9d92c91a3bd728d23e41e8ceecc3" exitCode=1 Mar 18 08:58:51.428760 master-0 kubenswrapper[6976]: I0318 08:58:51.428654 6976 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"93298cb2-d669-49ea-92be-8891f07ab1c5","Type":"ContainerDied","Data":"c0f26fec4f81ffb39062787c37d928b9983f9d92c91a3bd728d23e41e8ceecc3"} Mar 18 08:58:51.558266 master-0 kubenswrapper[6976]: E0318 08:58:51.558163 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:58:52.877511 master-0 kubenswrapper[6976]: I0318 08:58:52.877438 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_93298cb2-d669-49ea-92be-8891f07ab1c5/installer/0.log" Mar 18 08:58:52.878184 master-0 kubenswrapper[6976]: I0318 08:58:52.877541 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:52.884689 master-0 kubenswrapper[6976]: I0318 08:58:52.884642 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_5ca7b84e-0aff-4526-948a-03492712ff8f/installer/0.log" Mar 18 08:58:52.884844 master-0 kubenswrapper[6976]: I0318 08:58:52.884729 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:53.044964 master-0 kubenswrapper[6976]: I0318 08:58:53.044880 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access\") pod \"5ca7b84e-0aff-4526-948a-03492712ff8f\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " Mar 18 08:58:53.045712 master-0 kubenswrapper[6976]: I0318 08:58:53.045002 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock\") pod \"5ca7b84e-0aff-4526-948a-03492712ff8f\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " Mar 18 08:58:53.045712 master-0 kubenswrapper[6976]: I0318 08:58:53.045112 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock" (OuterVolumeSpecName: "var-lock") pod "5ca7b84e-0aff-4526-948a-03492712ff8f" (UID: "5ca7b84e-0aff-4526-948a-03492712ff8f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:53.045712 master-0 kubenswrapper[6976]: I0318 08:58:53.045210 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock\") pod \"93298cb2-d669-49ea-92be-8891f07ab1c5\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " Mar 18 08:58:53.045712 master-0 kubenswrapper[6976]: I0318 08:58:53.045300 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock" (OuterVolumeSpecName: "var-lock") pod "93298cb2-d669-49ea-92be-8891f07ab1c5" (UID: "93298cb2-d669-49ea-92be-8891f07ab1c5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:53.045712 master-0 kubenswrapper[6976]: I0318 08:58:53.045370 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access\") pod \"93298cb2-d669-49ea-92be-8891f07ab1c5\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " Mar 18 08:58:53.046145 master-0 kubenswrapper[6976]: I0318 08:58:53.046090 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir\") pod \"93298cb2-d669-49ea-92be-8891f07ab1c5\" (UID: \"93298cb2-d669-49ea-92be-8891f07ab1c5\") " Mar 18 08:58:53.046233 master-0 kubenswrapper[6976]: I0318 08:58:53.046203 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir\") pod \"5ca7b84e-0aff-4526-948a-03492712ff8f\" (UID: \"5ca7b84e-0aff-4526-948a-03492712ff8f\") " Mar 18 08:58:53.046337 master-0 kubenswrapper[6976]: I0318 08:58:53.046206 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "93298cb2-d669-49ea-92be-8891f07ab1c5" (UID: "93298cb2-d669-49ea-92be-8891f07ab1c5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:53.046419 master-0 kubenswrapper[6976]: I0318 08:58:53.046303 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5ca7b84e-0aff-4526-948a-03492712ff8f" (UID: "5ca7b84e-0aff-4526-948a-03492712ff8f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 08:58:53.046931 master-0 kubenswrapper[6976]: I0318 08:58:53.046861 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.046931 master-0 kubenswrapper[6976]: I0318 08:58:53.046918 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.047121 master-0 kubenswrapper[6976]: I0318 08:58:53.046946 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5ca7b84e-0aff-4526-948a-03492712ff8f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.047121 master-0 kubenswrapper[6976]: I0318 08:58:53.046972 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/93298cb2-d669-49ea-92be-8891f07ab1c5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.048961 master-0 kubenswrapper[6976]: I0318 08:58:53.048909 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5ca7b84e-0aff-4526-948a-03492712ff8f" (UID: "5ca7b84e-0aff-4526-948a-03492712ff8f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:53.049780 master-0 kubenswrapper[6976]: I0318 08:58:53.049722 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "93298cb2-d669-49ea-92be-8891f07ab1c5" (UID: "93298cb2-d669-49ea-92be-8891f07ab1c5"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 08:58:53.148702 master-0 kubenswrapper[6976]: I0318 08:58:53.148501 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5ca7b84e-0aff-4526-948a-03492712ff8f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.148702 master-0 kubenswrapper[6976]: I0318 08:58:53.148597 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/93298cb2-d669-49ea-92be-8891f07ab1c5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 08:58:53.447891 master-0 kubenswrapper[6976]: I0318 08:58:53.447717 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_93298cb2-d669-49ea-92be-8891f07ab1c5/installer/0.log" Mar 18 08:58:53.447891 master-0 kubenswrapper[6976]: I0318 08:58:53.447863 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"93298cb2-d669-49ea-92be-8891f07ab1c5","Type":"ContainerDied","Data":"cbf3348e82bffe8480be217acc63e599c4842d6df59ff32a187560845a00e908"} Mar 18 08:58:53.448256 master-0 kubenswrapper[6976]: I0318 08:58:53.447902 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbf3348e82bffe8480be217acc63e599c4842d6df59ff32a187560845a00e908" Mar 18 08:58:53.448256 master-0 kubenswrapper[6976]: I0318 08:58:53.448065 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 08:58:53.450921 master-0 kubenswrapper[6976]: I0318 08:58:53.450865 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_5ca7b84e-0aff-4526-948a-03492712ff8f/installer/0.log" Mar 18 08:58:53.451056 master-0 kubenswrapper[6976]: I0318 08:58:53.450951 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"5ca7b84e-0aff-4526-948a-03492712ff8f","Type":"ContainerDied","Data":"b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093"} Mar 18 08:58:53.451056 master-0 kubenswrapper[6976]: I0318 08:58:53.450990 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093" Mar 18 08:58:53.451285 master-0 kubenswrapper[6976]: I0318 08:58:53.451064 6976 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 08:58:59.310620 master-0 kubenswrapper[6976]: E0318 08:58:59.310498 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:01.559435 master-0 kubenswrapper[6976]: E0318 08:59:01.559344 6976 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:01.560671 master-0 kubenswrapper[6976]: I0318 08:59:01.560267 6976 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 08:59:02.599297 master-0 kubenswrapper[6976]: I0318 08:59:02.599142 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:59:02.600091 master-0 kubenswrapper[6976]: E0318 08:59:02.599554 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 08:59:08.570062 master-0 kubenswrapper[6976]: I0318 08:59:08.570016 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/3.log" Mar 18 08:59:08.570697 master-0 kubenswrapper[6976]: I0318 08:59:08.570657 6976 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/2.log" Mar 18 08:59:08.571076 master-0 kubenswrapper[6976]: I0318 08:59:08.571034 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" exitCode=1 Mar 18 08:59:08.571120 master-0 kubenswrapper[6976]: I0318 08:59:08.571094 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerDied","Data":"736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226"} Mar 18 08:59:08.571151 master-0 kubenswrapper[6976]: I0318 08:59:08.571141 6976 scope.go:117] "RemoveContainer" containerID="4288f0a281b19c9f93fcb8b8d7e439e4c34597fa12a429e7eb6e155e31d87b19" Mar 18 08:59:08.571993 master-0 kubenswrapper[6976]: I0318 08:59:08.571965 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 08:59:08.572533 master-0 kubenswrapper[6976]: E0318 08:59:08.572490 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 08:59:09.310980 master-0 kubenswrapper[6976]: E0318 08:59:09.310905 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
Mar 18 08:59:09.310980 master-0 kubenswrapper[6976]: E0318 08:59:09.310958 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 08:59:09.580916 master-0 kubenswrapper[6976]: I0318 08:59:09.580746 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/1.log" Mar 18 08:59:09.582013 master-0 kubenswrapper[6976]: I0318 08:59:09.581417 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/0.log" Mar 18 08:59:09.582013 master-0 kubenswrapper[6976]: I0318 08:59:09.581863 6976 generic.go:334] "Generic (PLEG): container finished" podID="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" containerID="8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44" exitCode=1 Mar 18 08:59:09.582013 master-0 kubenswrapper[6976]: I0318 08:59:09.581909 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerDied","Data":"8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44"} Mar 18 08:59:09.582013 master-0 kubenswrapper[6976]: I0318 08:59:09.581966 6976 scope.go:117] "RemoveContainer" containerID="7a5f71287e8b5eb717808046e6ba2bfb7e60eb4819b757b6fc0b37c1ed02f420" Mar 18 08:59:09.582733 master-0 kubenswrapper[6976]: I0318 08:59:09.582685 6976 scope.go:117] "RemoveContainer" containerID="8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44" Mar 18 08:59:09.583022 master-0 kubenswrapper[6976]: E0318 08:59:09.582980 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver 
pod=network-node-identity-lf7kq_openshift-network-node-identity(57affd8b-d1ce-40d2-b31e-7b18645ca7b6)\"" pod="openshift-network-node-identity/network-node-identity-lf7kq" podUID="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" Mar 18 08:59:09.585228 master-0 kubenswrapper[6976]: I0318 08:59:09.585163 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/3.log" Mar 18 08:59:10.595495 master-0 kubenswrapper[6976]: I0318 08:59:10.595450 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/1.log" Mar 18 08:59:10.607773 master-0 kubenswrapper[6976]: I0318 08:59:10.607693 6976 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)" Mar 18 08:59:11.561533 master-0 kubenswrapper[6976]: E0318 08:59:11.561402 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 18 08:59:14.598352 master-0 kubenswrapper[6976]: I0318 08:59:14.598280 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:59:14.636993 master-0 kubenswrapper[6976]: I0318 08:59:14.636863 6976 generic.go:334] "Generic (PLEG): container finished" podID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerID="8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e" exitCode=0 Mar 18 08:59:14.636993 master-0 kubenswrapper[6976]: I0318 08:59:14.636952 6976 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerDied","Data":"8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e"} Mar 18 08:59:14.637406 master-0 kubenswrapper[6976]: I0318 08:59:14.637021 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerStarted","Data":"dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e"} Mar 18 08:59:14.637406 master-0 kubenswrapper[6976]: I0318 08:59:14.637064 6976 scope.go:117] "RemoveContainer" containerID="ea2c5251f8b00aeeac7b68834229738af66c558b5a20fbe3cc0b6efb0ce7e30a" Mar 18 08:59:15.278380 master-0 kubenswrapper[6976]: I0318 08:59:15.278321 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 08:59:15.278555 master-0 kubenswrapper[6976]: I0318 08:59:15.278390 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 08:59:15.281282 master-0 kubenswrapper[6976]: I0318 08:59:15.281227 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:15.281282 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:15.281282 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:15.281282 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:15.281509 master-0 kubenswrapper[6976]: I0318 08:59:15.281479 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:15.649459 master-0 kubenswrapper[6976]: I0318 08:59:15.649309 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a"} Mar 18 08:59:16.281070 master-0 kubenswrapper[6976]: I0318 08:59:16.280991 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:16.281070 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:16.281070 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:16.281070 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:16.281528 master-0 kubenswrapper[6976]: I0318 08:59:16.281081 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: E0318 08:59:16.956229 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: &Event{ObjectMeta:{router-default-7dcf5569b5-sgsmn.189de38d11aad472 openshift-ingress 10796 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-sgsmn,UID:93cb5ef1-e8f1-4d11-8c93-1abf24626176,APIVersion:v1,ResourceVersion:10172,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: body: [-]backend-http failed: reason withheld Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:53:43 +0000 UTC,LastTimestamp:2026-03-18 08:58:09.28063078 +0000 UTC m=+588.866232415,Count:221,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 18 08:59:16.956411 master-0 kubenswrapper[6976]: > Mar 18 08:59:17.281355 master-0 kubenswrapper[6976]: I0318 08:59:17.281210 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:17.281355 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:17.281355 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:17.281355 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:17.281945 master-0 kubenswrapper[6976]: I0318 08:59:17.281902 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Mar 18 08:59:18.280189 master-0 kubenswrapper[6976]: I0318 08:59:18.280132 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:18.280189 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:18.280189 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:18.280189 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:18.281450 master-0 kubenswrapper[6976]: I0318 08:59:18.280985 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:19.279827 master-0 kubenswrapper[6976]: I0318 08:59:19.279723 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:19.279827 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:19.279827 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:19.279827 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:19.280846 master-0 kubenswrapper[6976]: I0318 08:59:19.279853 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:20.280419 master-0 kubenswrapper[6976]: I0318 08:59:20.280304 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:20.280419 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:20.280419 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:20.280419 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:20.280419 master-0 kubenswrapper[6976]: I0318 08:59:20.280398 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:21.280994 master-0 kubenswrapper[6976]: I0318 08:59:21.280726 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:21.280994 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:21.280994 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:21.280994 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:21.282007 master-0 kubenswrapper[6976]: I0318 08:59:21.281000 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:21.763113 master-0 kubenswrapper[6976]: E0318 08:59:21.762995 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 18 08:59:22.280046 master-0 
kubenswrapper[6976]: I0318 08:59:22.279959 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:22.280046 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:22.280046 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:22.280046 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:22.280370 master-0 kubenswrapper[6976]: I0318 08:59:22.280093 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:23.280826 master-0 kubenswrapper[6976]: I0318 08:59:23.280742 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:23.280826 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:23.280826 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:23.280826 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:23.281780 master-0 kubenswrapper[6976]: I0318 08:59:23.280844 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:23.565353 master-0 kubenswrapper[6976]: I0318 08:59:23.565228 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:59:23.598358 master-0 
kubenswrapper[6976]: I0318 08:59:23.598264 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 08:59:23.598835 master-0 kubenswrapper[6976]: E0318 08:59:23.598767 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 08:59:23.784374 master-0 kubenswrapper[6976]: I0318 08:59:23.784203 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:59:24.280098 master-0 kubenswrapper[6976]: I0318 08:59:24.280005 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:24.280098 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:24.280098 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:24.280098 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:24.280441 master-0 kubenswrapper[6976]: I0318 08:59:24.280131 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:24.599151 master-0 kubenswrapper[6976]: I0318 08:59:24.598970 6976 scope.go:117] "RemoveContainer" containerID="8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44" Mar 18 08:59:24.637544 master-0 kubenswrapper[6976]: 
E0318 08:59:24.637496 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 08:59:24.638147 master-0 kubenswrapper[6976]: I0318 08:59:24.638132 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 18 08:59:24.672506 master-0 kubenswrapper[6976]: W0318 08:59:24.672464 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6 WatchSource:0}: Error finding container 7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6: Status 404 returned error can't find the container with id 7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6 Mar 18 08:59:24.724465 master-0 kubenswrapper[6976]: I0318 08:59:24.724393 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6"} Mar 18 08:59:25.279602 master-0 kubenswrapper[6976]: I0318 08:59:25.279534 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:25.279602 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:25.279602 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:25.279602 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:25.279858 master-0 kubenswrapper[6976]: I0318 08:59:25.279622 6976 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:25.736042 master-0 kubenswrapper[6976]: I0318 08:59:25.735982 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/1.log" Mar 18 08:59:25.737887 master-0 kubenswrapper[6976]: I0318 08:59:25.737814 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-lf7kq" event={"ID":"57affd8b-d1ce-40d2-b31e-7b18645ca7b6","Type":"ContainerStarted","Data":"196868b6ba00b43679563648494ebdbbc20088dc020a33b216f712c06e51560e"} Mar 18 08:59:25.740789 master-0 kubenswrapper[6976]: I0318 08:59:25.740753 6976 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="e9c6441b6451eb8d4f18b81edc159711a0094c083c79128b3e30069808890f14" exitCode=0 Mar 18 08:59:25.740990 master-0 kubenswrapper[6976]: I0318 08:59:25.740828 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e9c6441b6451eb8d4f18b81edc159711a0094c083c79128b3e30069808890f14"} Mar 18 08:59:25.741286 master-0 kubenswrapper[6976]: I0318 08:59:25.741240 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 08:59:25.741393 master-0 kubenswrapper[6976]: I0318 08:59:25.741290 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 08:59:26.280392 master-0 kubenswrapper[6976]: I0318 08:59:26.280341 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:26.280392 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:26.280392 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:26.280392 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:26.280924 master-0 kubenswrapper[6976]: I0318 08:59:26.280398 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:26.784562 master-0 kubenswrapper[6976]: I0318 08:59:26.784497 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:27.280419 master-0 kubenswrapper[6976]: I0318 08:59:27.280328 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:27.280419 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:27.280419 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:27.280419 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:27.280990 master-0 kubenswrapper[6976]: I0318 08:59:27.280444 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 08:59:28.279713 master-0 kubenswrapper[6976]: I0318 08:59:28.279493 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:28.279713 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:28.279713 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:28.279713 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:28.279713 master-0 kubenswrapper[6976]: I0318 08:59:28.279626 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:29.280114 master-0 kubenswrapper[6976]: I0318 08:59:29.280030 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:29.280114 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:29.280114 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:29.280114 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:29.280751 master-0 kubenswrapper[6976]: I0318 08:59:29.280121 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:30.280226 master-0 kubenswrapper[6976]: I0318 08:59:30.280170 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:30.280226 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:30.280226 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:30.280226 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:30.281367 master-0 kubenswrapper[6976]: I0318 08:59:30.281002 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:31.280539 master-0 kubenswrapper[6976]: I0318 08:59:31.280446 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:31.280539 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:31.280539 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:31.280539 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:31.280539 master-0 kubenswrapper[6976]: I0318 08:59:31.280536 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:32.165547 master-0 kubenswrapper[6976]: E0318 08:59:32.165110 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 18 08:59:32.280743 master-0 
kubenswrapper[6976]: I0318 08:59:32.280634 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:32.280743 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:32.280743 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:32.280743 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:32.281656 master-0 kubenswrapper[6976]: I0318 08:59:32.280738 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:33.280889 master-0 kubenswrapper[6976]: I0318 08:59:33.280788 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:33.280889 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:33.280889 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:33.280889 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:33.282008 master-0 kubenswrapper[6976]: I0318 08:59:33.280925 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:34.279998 master-0 kubenswrapper[6976]: I0318 08:59:34.279881 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:34.279998 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:34.279998 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:34.279998 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:34.279998 master-0 kubenswrapper[6976]: I0318 08:59:34.279967 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:34.599373 master-0 kubenswrapper[6976]: I0318 08:59:34.599161 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 08:59:34.600276 master-0 kubenswrapper[6976]: E0318 08:59:34.599613 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 08:59:35.280963 master-0 kubenswrapper[6976]: I0318 08:59:35.280849 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:35.280963 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:35.280963 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:35.280963 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:35.281409 master-0 kubenswrapper[6976]: I0318 08:59:35.280954 6976 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:36.280683 master-0 kubenswrapper[6976]: I0318 08:59:36.280593 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:36.280683 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:36.280683 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:36.280683 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:36.281535 master-0 kubenswrapper[6976]: I0318 08:59:36.280690 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:36.783803 master-0 kubenswrapper[6976]: I0318 08:59:36.783697 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:37.280927 master-0 kubenswrapper[6976]: I0318 08:59:37.280851 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:37.280927 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:37.280927 
master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:37.280927 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:37.281632 master-0 kubenswrapper[6976]: I0318 08:59:37.280939 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:38.280375 master-0 kubenswrapper[6976]: I0318 08:59:38.280289 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:38.280375 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:38.280375 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:38.280375 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:38.280979 master-0 kubenswrapper[6976]: I0318 08:59:38.280394 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:39.280195 master-0 kubenswrapper[6976]: I0318 08:59:39.280104 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:39.280195 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:39.280195 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:39.280195 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:39.280195 master-0 kubenswrapper[6976]: I0318 08:59:39.280187 6976 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:40.280320 master-0 kubenswrapper[6976]: I0318 08:59:40.280209 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:40.280320 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:40.280320 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:40.280320 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:40.280320 master-0 kubenswrapper[6976]: I0318 08:59:40.280307 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:41.281295 master-0 kubenswrapper[6976]: I0318 08:59:41.281183 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:41.281295 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:41.281295 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:41.281295 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:41.281295 master-0 kubenswrapper[6976]: I0318 08:59:41.281284 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 08:59:42.280022 master-0 kubenswrapper[6976]: I0318 08:59:42.279905 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:42.280022 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:42.280022 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:42.280022 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:42.280022 master-0 kubenswrapper[6976]: I0318 08:59:42.280013 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:42.967337 master-0 kubenswrapper[6976]: E0318 08:59:42.967212 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 18 08:59:43.280267 master-0 kubenswrapper[6976]: I0318 08:59:43.280057 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:43.280267 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:43.280267 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:43.280267 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:43.280267 master-0 kubenswrapper[6976]: I0318 08:59:43.280149 6976 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:44.280026 master-0 kubenswrapper[6976]: I0318 08:59:44.279923 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:44.280026 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:44.280026 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:44.280026 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:44.280026 master-0 kubenswrapper[6976]: I0318 08:59:44.279992 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:45.280271 master-0 kubenswrapper[6976]: I0318 08:59:45.280183 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:45.280271 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:45.280271 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:45.280271 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:45.281345 master-0 kubenswrapper[6976]: I0318 08:59:45.280290 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:45.599095 
master-0 kubenswrapper[6976]: I0318 08:59:45.598913 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 08:59:45.599366 master-0 kubenswrapper[6976]: E0318 08:59:45.599326 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 08:59:46.280866 master-0 kubenswrapper[6976]: I0318 08:59:46.280766 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:46.280866 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:46.280866 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:46.280866 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:46.281955 master-0 kubenswrapper[6976]: I0318 08:59:46.280883 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:46.784836 master-0 kubenswrapper[6976]: I0318 08:59:46.784700 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Mar 18 08:59:46.785149 master-0 kubenswrapper[6976]: I0318 08:59:46.784876 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:59:46.785854 master-0 kubenswrapper[6976]: I0318 08:59:46.785799 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 08:59:46.786196 master-0 kubenswrapper[6976]: I0318 08:59:46.786148 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a" gracePeriod=30 Mar 18 08:59:47.279688 master-0 kubenswrapper[6976]: I0318 08:59:47.279627 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:47.279688 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:47.279688 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:47.279688 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:47.280017 master-0 kubenswrapper[6976]: I0318 08:59:47.279706 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:47.913966 master-0 
kubenswrapper[6976]: I0318 08:59:47.913853 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a" exitCode=2 Mar 18 08:59:47.915095 master-0 kubenswrapper[6976]: I0318 08:59:47.913973 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a"} Mar 18 08:59:47.915095 master-0 kubenswrapper[6976]: I0318 08:59:47.914063 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"} Mar 18 08:59:47.915095 master-0 kubenswrapper[6976]: I0318 08:59:47.914095 6976 scope.go:117] "RemoveContainer" containerID="5d936f20024a27505a56975f151b7c3bf1da50cb1c5e184e3c0c3840e435fca8" Mar 18 08:59:48.280694 master-0 kubenswrapper[6976]: I0318 08:59:48.280622 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:48.280694 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:48.280694 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:48.280694 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:48.281040 master-0 kubenswrapper[6976]: I0318 08:59:48.280733 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
08:59:49.279993 master-0 kubenswrapper[6976]: I0318 08:59:49.279935 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:49.279993 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:49.279993 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:49.279993 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:49.280606 master-0 kubenswrapper[6976]: I0318 08:59:49.280021 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:50.280097 master-0 kubenswrapper[6976]: I0318 08:59:50.280020 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:50.280097 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:50.280097 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:50.280097 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:50.281206 master-0 kubenswrapper[6976]: I0318 08:59:50.280108 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:50.963495 master-0 kubenswrapper[6976]: E0318 08:59:50.963287 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline 
exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3ce772f8d22 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:24.15621661 +0000 UTC m=+603.741818235,LastTimestamp:2026-03-18 08:58:24.15621661 +0000 UTC m=+603.741818235,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 08:59:51.280561 master-0 kubenswrapper[6976]: I0318 08:59:51.280389 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:51.280561 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:51.280561 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:51.280561 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:51.280561 master-0 kubenswrapper[6976]: I0318 08:59:51.280495 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:52.280941 master-0 kubenswrapper[6976]: I0318 08:59:52.280881 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:52.280941 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:52.280941 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:52.280941 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:52.281881 master-0 kubenswrapper[6976]: I0318 08:59:52.281665 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:53.280094 master-0 kubenswrapper[6976]: I0318 08:59:53.279992 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:53.280094 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:53.280094 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:53.280094 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:53.280793 master-0 kubenswrapper[6976]: I0318 08:59:53.280109 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:53.565951 master-0 kubenswrapper[6976]: I0318 08:59:53.565794 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:59:53.783750 master-0 kubenswrapper[6976]: I0318 08:59:53.783642 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 08:59:54.279809 master-0 kubenswrapper[6976]: I0318 08:59:54.279754 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:54.279809 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:54.279809 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:54.279809 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:54.280366 master-0 kubenswrapper[6976]: I0318 08:59:54.280323 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:54.568922 master-0 kubenswrapper[6976]: E0318 08:59:54.568622 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 18 08:59:55.279881 master-0 kubenswrapper[6976]: I0318 08:59:55.279783 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:55.279881 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:55.279881 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:55.279881 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:55.279881 master-0 kubenswrapper[6976]: I0318 08:59:55.279851 6976 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:56.282649 master-0 kubenswrapper[6976]: I0318 08:59:56.282583 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:56.282649 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:56.282649 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:56.282649 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:56.282649 master-0 kubenswrapper[6976]: I0318 08:59:56.282671 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:56.784133 master-0 kubenswrapper[6976]: I0318 08:59:56.784015 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 08:59:57.280062 master-0 kubenswrapper[6976]: I0318 08:59:57.279973 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:57.280062 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:57.280062 
master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:57.280062 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:57.280502 master-0 kubenswrapper[6976]: I0318 08:59:57.280088 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:58.281808 master-0 kubenswrapper[6976]: I0318 08:59:58.281674 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:58.281808 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:58.281808 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:58.281808 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:58.281808 master-0 kubenswrapper[6976]: I0318 08:59:58.281790 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:59.280784 master-0 kubenswrapper[6976]: I0318 08:59:59.280685 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 08:59:59.280784 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 08:59:59.280784 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 08:59:59.280784 master-0 kubenswrapper[6976]: healthz check failed Mar 18 08:59:59.282927 master-0 kubenswrapper[6976]: I0318 08:59:59.280808 6976 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 08:59:59.598749 master-0 kubenswrapper[6976]: I0318 08:59:59.598521 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 08:59:59.744243 master-0 kubenswrapper[6976]: E0318 08:59:59.744195 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:00:00.006683 master-0 kubenswrapper[6976]: I0318 09:00:00.006641 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/3.log" Mar 18 09:00:00.007176 master-0 kubenswrapper[6976]: I0318 09:00:00.007103 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50"} Mar 18 09:00:00.279414 master-0 kubenswrapper[6976]: I0318 09:00:00.279210 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:00.279414 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:00.279414 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:00.279414 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:00.279414 master-0 kubenswrapper[6976]: I0318 09:00:00.279398 6976 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:01.015627 master-0 kubenswrapper[6976]: I0318 09:00:01.015527 6976 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="c26eb3bf03b5fe4ebeece6b8722b565a3875e9cd3bc4e444bee1b43372467a32" exitCode=0 Mar 18 09:00:01.015627 master-0 kubenswrapper[6976]: I0318 09:00:01.015633 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"c26eb3bf03b5fe4ebeece6b8722b565a3875e9cd3bc4e444bee1b43372467a32"} Mar 18 09:00:01.016610 master-0 kubenswrapper[6976]: I0318 09:00:01.015858 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:00:01.016610 master-0 kubenswrapper[6976]: I0318 09:00:01.015875 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:00:01.281240 master-0 kubenswrapper[6976]: I0318 09:00:01.281079 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:01.281240 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:01.281240 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:01.281240 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:01.281240 master-0 kubenswrapper[6976]: I0318 09:00:01.281177 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:02.280951 master-0 kubenswrapper[6976]: I0318 09:00:02.280827 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:02.280951 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:02.280951 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:02.280951 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:02.280951 master-0 kubenswrapper[6976]: I0318 09:00:02.280919 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:03.279981 master-0 kubenswrapper[6976]: I0318 09:00:03.279872 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:03.279981 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:03.279981 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:03.279981 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:03.279981 master-0 kubenswrapper[6976]: I0318 09:00:03.279969 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:04.043791 master-0 kubenswrapper[6976]: I0318 09:00:04.043713 6976 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:00:04.043791 master-0 kubenswrapper[6976]: I0318 09:00:04.043766 6976 generic.go:334] "Generic (PLEG): container finished" podID="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" containerID="77222f1857306a427ed0136d01e66abea08222205dcb9a92415c3629bd81b945" exitCode=1 Mar 18 09:00:04.044783 master-0 kubenswrapper[6976]: I0318 09:00:04.043823 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerDied","Data":"77222f1857306a427ed0136d01e66abea08222205dcb9a92415c3629bd81b945"} Mar 18 09:00:04.044783 master-0 kubenswrapper[6976]: I0318 09:00:04.044314 6976 scope.go:117] "RemoveContainer" containerID="77222f1857306a427ed0136d01e66abea08222205dcb9a92415c3629bd81b945" Mar 18 09:00:04.046804 master-0 kubenswrapper[6976]: I0318 09:00:04.046742 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/1.log" Mar 18 09:00:04.047336 master-0 kubenswrapper[6976]: I0318 09:00:04.047225 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/0.log" Mar 18 09:00:04.047336 master-0 kubenswrapper[6976]: I0318 09:00:04.047279 6976 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d" exitCode=1 Mar 18 09:00:04.047336 master-0 kubenswrapper[6976]: I0318 09:00:04.047318 6976 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerDied","Data":"846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d"} Mar 18 09:00:04.047616 master-0 kubenswrapper[6976]: I0318 09:00:04.047377 6976 scope.go:117] "RemoveContainer" containerID="b7023722fb31c9ade901bb4f5f5537f159e85f319ef882c910c37283dbf679ec" Mar 18 09:00:04.048603 master-0 kubenswrapper[6976]: I0318 09:00:04.048280 6976 scope.go:117] "RemoveContainer" containerID="846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d" Mar 18 09:00:04.048787 master-0 kubenswrapper[6976]: E0318 09:00:04.048727 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:00:04.280195 master-0 kubenswrapper[6976]: I0318 09:00:04.280131 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:04.280195 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:04.280195 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:04.280195 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:04.280451 master-0 kubenswrapper[6976]: I0318 09:00:04.280201 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 18 09:00:05.056417 master-0 kubenswrapper[6976]: I0318 09:00:05.056355 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:00:05.057063 master-0 kubenswrapper[6976]: I0318 09:00:05.056469 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"ecb41a0c454739e7867636af86a7a8205d71ad4c3b3f9260127598e7b32e96cb"} Mar 18 09:00:05.058208 master-0 kubenswrapper[6976]: I0318 09:00:05.058180 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/1.log" Mar 18 09:00:05.280643 master-0 kubenswrapper[6976]: I0318 09:00:05.280520 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:05.280643 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:05.280643 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:05.280643 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:05.281140 master-0 kubenswrapper[6976]: I0318 09:00:05.280669 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:06.280198 master-0 kubenswrapper[6976]: I0318 
09:00:06.280117 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:06.280198 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:06.280198 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:06.280198 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:06.281128 master-0 kubenswrapper[6976]: I0318 09:00:06.280224 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:06.784333 master-0 kubenswrapper[6976]: I0318 09:00:06.784221 6976 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:00:07.074963 master-0 kubenswrapper[6976]: I0318 09:00:07.074877 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/1.log" Mar 18 09:00:07.076676 master-0 kubenswrapper[6976]: I0318 09:00:07.076633 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/0.log" Mar 18 09:00:07.076753 master-0 kubenswrapper[6976]: I0318 09:00:07.076717 6976 generic.go:334] "Generic (PLEG): container finished" 
podID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerID="9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8" exitCode=1 Mar 18 09:00:07.076796 master-0 kubenswrapper[6976]: I0318 09:00:07.076761 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerDied","Data":"9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8"} Mar 18 09:00:07.076835 master-0 kubenswrapper[6976]: I0318 09:00:07.076810 6976 scope.go:117] "RemoveContainer" containerID="bc52f72875ab784115d2ae7cf81aabfc20eff1b537ca6458d743902aaf4541e4" Mar 18 09:00:07.077548 master-0 kubenswrapper[6976]: I0318 09:00:07.077510 6976 scope.go:117] "RemoveContainer" containerID="9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8" Mar 18 09:00:07.077936 master-0 kubenswrapper[6976]: E0318 09:00:07.077906 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-xfqsm_openshift-operator-controller(800297fe-77fd-4f58-ade2-32a147cd7d5c)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" Mar 18 09:00:07.281024 master-0 kubenswrapper[6976]: I0318 09:00:07.280929 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:07.281024 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:07.281024 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:07.281024 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:07.281996 
master-0 kubenswrapper[6976]: I0318 09:00:07.281028 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:07.771829 master-0 kubenswrapper[6976]: E0318 09:00:07.771429 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 18 09:00:08.087585 master-0 kubenswrapper[6976]: I0318 09:00:08.087438 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/1.log" Mar 18 09:00:08.280738 master-0 kubenswrapper[6976]: I0318 09:00:08.280664 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:08.280738 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:08.280738 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:08.280738 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:08.281054 master-0 kubenswrapper[6976]: I0318 09:00:08.280754 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:09.280126 master-0 kubenswrapper[6976]: I0318 09:00:09.280065 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:09.280126 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:09.280126 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:09.280126 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:09.280484 master-0 kubenswrapper[6976]: I0318 09:00:09.280140 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:10.108208 master-0 kubenswrapper[6976]: I0318 09:00:10.108079 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/config-sync-controllers/0.log" Mar 18 09:00:10.109114 master-0 kubenswrapper[6976]: I0318 09:00:10.108973 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:00:10.109114 master-0 kubenswrapper[6976]: I0318 09:00:10.109036 6976 generic.go:334] "Generic (PLEG): container finished" podID="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" containerID="096ac353f933435e5c018fb15b66b68ffb3a1e47071e3f93549e3c9af4316fb4" exitCode=1 Mar 18 09:00:10.109114 master-0 kubenswrapper[6976]: I0318 09:00:10.109079 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" 
event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerDied","Data":"096ac353f933435e5c018fb15b66b68ffb3a1e47071e3f93549e3c9af4316fb4"} Mar 18 09:00:10.109855 master-0 kubenswrapper[6976]: I0318 09:00:10.109802 6976 scope.go:117] "RemoveContainer" containerID="096ac353f933435e5c018fb15b66b68ffb3a1e47071e3f93549e3c9af4316fb4" Mar 18 09:00:10.282187 master-0 kubenswrapper[6976]: I0318 09:00:10.282142 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:10.282187 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:10.282187 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:10.282187 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:10.282462 master-0 kubenswrapper[6976]: I0318 09:00:10.282221 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:10.624198 master-0 kubenswrapper[6976]: I0318 09:00:10.623976 6976 status_manager.go:851] "Failed to get status for pod" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods ingress-operator-66b84d69b-4cxfh)" Mar 18 09:00:11.122365 master-0 kubenswrapper[6976]: I0318 09:00:11.122241 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log" Mar 18 09:00:11.123420 master-0 kubenswrapper[6976]: I0318 09:00:11.123366 6976 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/0.log" Mar 18 09:00:11.124466 master-0 kubenswrapper[6976]: I0318 09:00:11.124385 6976 generic.go:334] "Generic (PLEG): container finished" podID="411d544f-e105-44f0-927a-f61406b3f070" containerID="c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70" exitCode=1 Mar 18 09:00:11.124633 master-0 kubenswrapper[6976]: I0318 09:00:11.124527 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerDied","Data":"c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70"} Mar 18 09:00:11.124803 master-0 kubenswrapper[6976]: I0318 09:00:11.124670 6976 scope.go:117] "RemoveContainer" containerID="177f16090fa41cba4e3892f17219367dee40fa3695daf9c589750f25c0f6d328" Mar 18 09:00:11.125681 master-0 kubenswrapper[6976]: I0318 09:00:11.125614 6976 scope.go:117] "RemoveContainer" containerID="c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70" Mar 18 09:00:11.126117 master-0 kubenswrapper[6976]: E0318 09:00:11.126024 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-vbxdw_openshift-catalogd(411d544f-e105-44f0-927a-f61406b3f070)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" Mar 18 09:00:11.128798 master-0 kubenswrapper[6976]: I0318 09:00:11.128762 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/config-sync-controllers/0.log" Mar 18 09:00:11.129454 master-0 kubenswrapper[6976]: I0318 
09:00:11.129406 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:00:11.129543 master-0 kubenswrapper[6976]: I0318 09:00:11.129480 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" event={"ID":"94e2a8f0-2c2e-43da-9fa9-69edfcd77830","Type":"ContainerStarted","Data":"bcbe5fe66019d4fd5dfb3293f95470c39c40bca7d39ca79fac88e549747f7cba"} Mar 18 09:00:11.281175 master-0 kubenswrapper[6976]: I0318 09:00:11.281089 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:11.281175 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:11.281175 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:11.281175 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:11.281843 master-0 kubenswrapper[6976]: I0318 09:00:11.281185 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:12.140343 master-0 kubenswrapper[6976]: I0318 09:00:12.140270 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log" Mar 18 09:00:12.279708 master-0 kubenswrapper[6976]: I0318 09:00:12.279647 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup 
probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:12.279708 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:12.279708 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:12.279708 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:12.279914 master-0 kubenswrapper[6976]: I0318 09:00:12.279734 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:13.157838 master-0 kubenswrapper[6976]: I0318 09:00:13.157752 6976 generic.go:334] "Generic (PLEG): container finished" podID="ca9d4694-8675-47c5-819f-89bba9dcdc0f" containerID="c88fcd910d6e8db24ed27b15176e93cabbfee77fff73e20a53806a79c06e2fd5" exitCode=0 Mar 18 09:00:13.157838 master-0 kubenswrapper[6976]: I0318 09:00:13.157826 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" event={"ID":"ca9d4694-8675-47c5-819f-89bba9dcdc0f","Type":"ContainerDied","Data":"c88fcd910d6e8db24ed27b15176e93cabbfee77fff73e20a53806a79c06e2fd5"} Mar 18 09:00:13.158761 master-0 kubenswrapper[6976]: I0318 09:00:13.158546 6976 scope.go:117] "RemoveContainer" containerID="c88fcd910d6e8db24ed27b15176e93cabbfee77fff73e20a53806a79c06e2fd5" Mar 18 09:00:13.280519 master-0 kubenswrapper[6976]: I0318 09:00:13.280424 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:13.280519 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:13.280519 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 
09:00:13.280519 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:13.280519 master-0 kubenswrapper[6976]: I0318 09:00:13.280505 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:14.180370 master-0 kubenswrapper[6976]: I0318 09:00:14.180284 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" event={"ID":"ca9d4694-8675-47c5-819f-89bba9dcdc0f","Type":"ContainerStarted","Data":"cef1a45ff46357dfc0409e6120e9e0c78cb19d5dd262f81d2cc56e810d6f6651"} Mar 18 09:00:14.181804 master-0 kubenswrapper[6976]: I0318 09:00:14.181741 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:00:14.185644 master-0 kubenswrapper[6976]: I0318 09:00:14.185537 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:00:14.281128 master-0 kubenswrapper[6976]: I0318 09:00:14.281039 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:14.281128 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:14.281128 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:14.281128 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:14.281755 master-0 kubenswrapper[6976]: I0318 09:00:14.281166 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:14.706153 master-0 kubenswrapper[6976]: I0318 09:00:14.706101 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:00:14.707318 master-0 kubenswrapper[6976]: I0318 09:00:14.707289 6976 scope.go:117] "RemoveContainer" containerID="9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8" Mar 18 09:00:14.707902 master-0 kubenswrapper[6976]: E0318 09:00:14.707867 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-xfqsm_openshift-operator-controller(800297fe-77fd-4f58-ade2-32a147cd7d5c)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" podUID="800297fe-77fd-4f58-ade2-32a147cd7d5c" Mar 18 09:00:15.280671 master-0 kubenswrapper[6976]: I0318 09:00:15.280619 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:15.280671 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:15.280671 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:15.280671 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:15.281725 master-0 kubenswrapper[6976]: I0318 09:00:15.281680 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:16.122927 master-0 kubenswrapper[6976]: I0318 09:00:16.122844 6976 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:00:16.123813 master-0 kubenswrapper[6976]: I0318 09:00:16.123773 6976 scope.go:117] "RemoveContainer" containerID="c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70" Mar 18 09:00:16.124192 master-0 kubenswrapper[6976]: E0318 09:00:16.124126 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-vbxdw_openshift-catalogd(411d544f-e105-44f0-927a-f61406b3f070)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" podUID="411d544f-e105-44f0-927a-f61406b3f070" Mar 18 09:00:16.280288 master-0 kubenswrapper[6976]: I0318 09:00:16.280217 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:16.280288 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:16.280288 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:16.280288 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:16.280581 master-0 kubenswrapper[6976]: I0318 09:00:16.280317 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:16.599167 master-0 kubenswrapper[6976]: I0318 09:00:16.599087 6976 scope.go:117] "RemoveContainer" containerID="846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d" Mar 18 09:00:16.784747 master-0 kubenswrapper[6976]: I0318 09:00:16.784640 6976 prober.go:107] "Probe failed" 
probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 18 09:00:16.785044 master-0 kubenswrapper[6976]: I0318 09:00:16.784770 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:00:16.785787 master-0 kubenswrapper[6976]: I0318 09:00:16.785729 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 18 09:00:16.785910 master-0 kubenswrapper[6976]: I0318 09:00:16.785837 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" gracePeriod=30 Mar 18 09:00:16.909422 master-0 kubenswrapper[6976]: E0318 09:00:16.909340 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:00:17.208534 master-0 kubenswrapper[6976]: I0318 09:00:17.208434 6976 generic.go:334] "Generic (PLEG): 
container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" exitCode=2 Mar 18 09:00:17.208534 master-0 kubenswrapper[6976]: I0318 09:00:17.208490 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"} Mar 18 09:00:17.208919 master-0 kubenswrapper[6976]: I0318 09:00:17.208596 6976 scope.go:117] "RemoveContainer" containerID="27d587c7891abbfb93354b414b8f680dfa9657b70ef3b27da5fccf707326fa1a" Mar 18 09:00:17.209380 master-0 kubenswrapper[6976]: I0318 09:00:17.209251 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:00:17.209831 master-0 kubenswrapper[6976]: E0318 09:00:17.209762 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:00:17.211804 master-0 kubenswrapper[6976]: I0318 09:00:17.211749 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/1.log" Mar 18 09:00:17.211914 master-0 kubenswrapper[6976]: I0318 09:00:17.211841 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8"} 
Mar 18 09:00:17.281253 master-0 kubenswrapper[6976]: I0318 09:00:17.281127 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:17.281253 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:17.281253 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:17.281253 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:17.281898 master-0 kubenswrapper[6976]: I0318 09:00:17.281252 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:18.280478 master-0 kubenswrapper[6976]: I0318 09:00:18.280413 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:18.280478 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:18.280478 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:18.280478 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:18.281428 master-0 kubenswrapper[6976]: I0318 09:00:18.280482 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:19.280610 master-0 kubenswrapper[6976]: I0318 09:00:19.280443 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:19.280610 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:19.280610 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:19.280610 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:19.281841 master-0 kubenswrapper[6976]: I0318 09:00:19.280563 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:20.280801 master-0 kubenswrapper[6976]: I0318 09:00:20.280685 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:20.280801 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:20.280801 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:20.280801 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:20.281820 master-0 kubenswrapper[6976]: I0318 09:00:20.280809 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:21.280681 master-0 kubenswrapper[6976]: I0318 09:00:21.280600 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:21.280681 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 
09:00:21.280681 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:21.280681 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:21.281379 master-0 kubenswrapper[6976]: I0318 09:00:21.280697 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:22.279973 master-0 kubenswrapper[6976]: I0318 09:00:22.279912 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:22.279973 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:22.279973 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:22.279973 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:22.280231 master-0 kubenswrapper[6976]: I0318 09:00:22.279989 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:22.425617 master-0 kubenswrapper[6976]: I0318 09:00:22.425499 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:00:22.426540 master-0 kubenswrapper[6976]: I0318 09:00:22.426493 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:00:22.426990 master-0 kubenswrapper[6976]: E0318 09:00:22.426933 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:00:23.281086 master-0 kubenswrapper[6976]: I0318 09:00:23.280996 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:23.281086 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:23.281086 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:23.281086 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:23.281675 master-0 kubenswrapper[6976]: I0318 09:00:23.281106 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:24.177993 master-0 kubenswrapper[6976]: E0318 09:00:24.177810 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 18 09:00:24.280335 master-0 kubenswrapper[6976]: I0318 09:00:24.280262 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:24.280335 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:24.280335 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:24.280335 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:24.280924 master-0 kubenswrapper[6976]: I0318 09:00:24.280357 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:24.706657 master-0 kubenswrapper[6976]: I0318 09:00:24.706532 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 09:00:24.707477 master-0 kubenswrapper[6976]: I0318 09:00:24.707434 6976 scope.go:117] "RemoveContainer" containerID="9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8"
Mar 18 09:00:24.966680 master-0 kubenswrapper[6976]: E0318 09:00:24.966427 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de3ceb357a97a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:25.165478266 +0000 UTC m=+604.751079901,LastTimestamp:2026-03-18 08:58:25.165478266 +0000 UTC m=+604.751079901,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:00:25.280215 master-0 kubenswrapper[6976]: I0318 09:00:25.279964 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:25.280215 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:25.280215 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:25.280215 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:25.280215 master-0 kubenswrapper[6976]: I0318 09:00:25.280046 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:25.283473 master-0 kubenswrapper[6976]: I0318 09:00:25.283340 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/1.log"
Mar 18 09:00:25.284689 master-0 kubenswrapper[6976]: I0318 09:00:25.284208 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" event={"ID":"800297fe-77fd-4f58-ade2-32a147cd7d5c","Type":"ContainerStarted","Data":"4c5d676d86cfd58175b111ac97105fe868ec0090ef0fc664ff29a0532c6f422a"}
Mar 18 09:00:25.284689 master-0 kubenswrapper[6976]: I0318 09:00:25.284541 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 09:00:26.123028 master-0 kubenswrapper[6976]: I0318 09:00:26.122925 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 09:00:26.123718 master-0 kubenswrapper[6976]: I0318 09:00:26.123677 6976 scope.go:117] "RemoveContainer" containerID="c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70"
Mar 18 09:00:26.280421 master-0 kubenswrapper[6976]: I0318 09:00:26.280309 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:26.280421 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:26.280421 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:26.280421 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:26.280421 master-0 kubenswrapper[6976]: I0318 09:00:26.280411 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:27.280401 master-0 kubenswrapper[6976]: I0318 09:00:27.280317 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:27.280401 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:27.280401 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:27.280401 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:27.281489 master-0 kubenswrapper[6976]: I0318 09:00:27.280416 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:27.318346 master-0 kubenswrapper[6976]: I0318 09:00:27.318258 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log"
Mar 18 09:00:27.319043 master-0 kubenswrapper[6976]: I0318 09:00:27.318964 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" event={"ID":"411d544f-e105-44f0-927a-f61406b3f070","Type":"ContainerStarted","Data":"1fb6b85640608194046242fa0601566be26038e3d497e11d1cbc84892e86c4c2"}
Mar 18 09:00:27.319656 master-0 kubenswrapper[6976]: I0318 09:00:27.319597 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 09:00:28.279903 master-0 kubenswrapper[6976]: I0318 09:00:28.279835 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:28.279903 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:28.279903 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:28.279903 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:28.279903 master-0 kubenswrapper[6976]: I0318 09:00:28.279902 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:29.280538 master-0 kubenswrapper[6976]: I0318 09:00:29.280431 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:29.280538 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:29.280538 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:29.280538 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:29.280538 master-0 kubenswrapper[6976]: I0318 09:00:29.280534 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:30.280959 master-0 kubenswrapper[6976]: I0318 09:00:30.280865 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:30.280959 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:30.280959 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:30.280959 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:30.280959 master-0 kubenswrapper[6976]: I0318 09:00:30.280946 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:31.280383 master-0 kubenswrapper[6976]: I0318 09:00:31.280262 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:31.280383 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:31.280383 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:31.280383 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:31.280918 master-0 kubenswrapper[6976]: I0318 09:00:31.280372 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:32.280029 master-0 kubenswrapper[6976]: I0318 09:00:32.279950 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:32.280029 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:32.280029 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:32.280029 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:32.280802 master-0 kubenswrapper[6976]: I0318 09:00:32.280036 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:33.279805 master-0 kubenswrapper[6976]: I0318 09:00:33.279706 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:33.279805 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:33.279805 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:33.279805 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:33.279805 master-0 kubenswrapper[6976]: I0318 09:00:33.279773 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:34.280122 master-0 kubenswrapper[6976]: I0318 09:00:34.280029 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:34.280122 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:34.280122 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:34.280122 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:34.281023 master-0 kubenswrapper[6976]: I0318 09:00:34.280127 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:34.709977 master-0 kubenswrapper[6976]: I0318 09:00:34.709897 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm"
Mar 18 09:00:35.019007 master-0 kubenswrapper[6976]: E0318 09:00:35.018803 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 09:00:35.281626 master-0 kubenswrapper[6976]: I0318 09:00:35.281511 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:35.281626 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:35.281626 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:35.281626 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:35.282610 master-0 kubenswrapper[6976]: I0318 09:00:35.281658 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:35.392881 master-0 kubenswrapper[6976]: I0318 09:00:35.392703 6976 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="9cb189c47185ee7666cdc7e6aa936134fd95f8598c903e678c39284b0494bcba" exitCode=0
Mar 18 09:00:35.392881 master-0 kubenswrapper[6976]: I0318 09:00:35.392784 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"9cb189c47185ee7666cdc7e6aa936134fd95f8598c903e678c39284b0494bcba"}
Mar 18 09:00:35.393246 master-0 kubenswrapper[6976]: I0318 09:00:35.393197 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375"
Mar 18 09:00:35.393246 master-0 kubenswrapper[6976]: I0318 09:00:35.393224 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375"
Mar 18 09:00:36.125189 master-0 kubenswrapper[6976]: I0318 09:00:36.125127 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw"
Mar 18 09:00:36.280478 master-0 kubenswrapper[6976]: I0318 09:00:36.280375 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:36.280478 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:36.280478 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:36.280478 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:36.281003 master-0 kubenswrapper[6976]: I0318 09:00:36.280494 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:37.279985 master-0 kubenswrapper[6976]: I0318 09:00:37.279904 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:37.279985 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:37.279985 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:37.279985 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:37.280884 master-0 kubenswrapper[6976]: I0318 09:00:37.280016 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:37.598265 master-0 kubenswrapper[6976]: I0318 09:00:37.598161 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:00:37.598694 master-0 kubenswrapper[6976]: E0318 09:00:37.598635 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:00:38.280959 master-0 kubenswrapper[6976]: I0318 09:00:38.280208 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:38.280959 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:38.280959 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:38.280959 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:38.280959 master-0 kubenswrapper[6976]: I0318 09:00:38.280317 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:38.418498 master-0 kubenswrapper[6976]: I0318 09:00:38.418416 6976 generic.go:334] "Generic (PLEG): container finished" podID="7cac1300-44c1-4a7d-8d14-efa9702ad9df" containerID="9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db" exitCode=0
Mar 18 09:00:38.418498 master-0 kubenswrapper[6976]: I0318 09:00:38.418473 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerDied","Data":"9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db"}
Mar 18 09:00:38.418942 master-0 kubenswrapper[6976]: I0318 09:00:38.418547 6976 scope.go:117] "RemoveContainer" containerID="fdb4bcca892ef3b8b38b6412f754f472839917394e632bf7ec218fe086926be2"
Mar 18 09:00:38.419654 master-0 kubenswrapper[6976]: I0318 09:00:38.419494 6976 scope.go:117] "RemoveContainer" containerID="9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db"
Mar 18 09:00:38.420155 master-0 kubenswrapper[6976]: E0318 09:00:38.419911 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-57f769d897-j2fgr_openshift-ovn-kubernetes(7cac1300-44c1-4a7d-8d14-efa9702ad9df)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" podUID="7cac1300-44c1-4a7d-8d14-efa9702ad9df"
Mar 18 09:00:39.280405 master-0 kubenswrapper[6976]: I0318 09:00:39.280254 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:39.280405 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:39.280405 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:39.280405 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:39.280405 master-0 kubenswrapper[6976]: I0318 09:00:39.280376 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:40.280032 master-0 kubenswrapper[6976]: I0318 09:00:40.279942 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:40.280032 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:40.280032 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:40.280032 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:40.280688 master-0 kubenswrapper[6976]: I0318 09:00:40.280040 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:41.179548 master-0 kubenswrapper[6976]: E0318 09:00:41.179410 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 09:00:41.281070 master-0 kubenswrapper[6976]: I0318 09:00:41.280941 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:41.281070 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:41.281070 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:41.281070 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:41.281070 master-0 kubenswrapper[6976]: I0318 09:00:41.281060 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:42.279721 master-0 kubenswrapper[6976]: I0318 09:00:42.279639 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:42.279721 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:42.279721 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:42.279721 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:42.280165 master-0 kubenswrapper[6976]: I0318 09:00:42.279716 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:43.281727 master-0 kubenswrapper[6976]: I0318 09:00:43.281669 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:43.281727 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:43.281727 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:43.281727 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:43.282806 master-0 kubenswrapper[6976]: I0318 09:00:43.282747 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:44.280688 master-0 kubenswrapper[6976]: I0318 09:00:44.280522 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:44.280688 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:44.280688 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:44.280688 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:44.281146 master-0 kubenswrapper[6976]: I0318 09:00:44.280691 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:44.484594 master-0 kubenswrapper[6976]: I0318 09:00:44.484506 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/0.log"
Mar 18 09:00:44.484594 master-0 kubenswrapper[6976]: I0318 09:00:44.484588 6976 generic.go:334] "Generic (PLEG): container finished" podID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" containerID="0caedbadbfcaeb7785b9d06130fc6e0d2a7ecb9753168035bbf898c397b762cf" exitCode=1
Mar 18 09:00:44.485545 master-0 kubenswrapper[6976]: I0318 09:00:44.484626 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerDied","Data":"0caedbadbfcaeb7785b9d06130fc6e0d2a7ecb9753168035bbf898c397b762cf"}
Mar 18 09:00:44.485545 master-0 kubenswrapper[6976]: I0318 09:00:44.485168 6976 scope.go:117] "RemoveContainer" containerID="0caedbadbfcaeb7785b9d06130fc6e0d2a7ecb9753168035bbf898c397b762cf"
Mar 18 09:00:45.280280 master-0 kubenswrapper[6976]: I0318 09:00:45.280156 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:45.280280 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:45.280280 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:45.280280 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:45.280280 master-0 kubenswrapper[6976]: I0318 09:00:45.280253 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:45.496047 master-0 kubenswrapper[6976]: I0318 09:00:45.495946 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log"
Mar 18 09:00:45.496047 master-0 kubenswrapper[6976]: I0318 09:00:45.496031 6976 generic.go:334] "Generic (PLEG): container finished" podID="25781967-12ce-490e-94aa-9b9722f495da" containerID="49a79a26d80521d4a77ceb38753751818ca40b01df46c62b4c6e6cd03feb2aa4" exitCode=1
Mar 18 09:00:45.497313 master-0 kubenswrapper[6976]: I0318 09:00:45.496125 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" event={"ID":"25781967-12ce-490e-94aa-9b9722f495da","Type":"ContainerDied","Data":"49a79a26d80521d4a77ceb38753751818ca40b01df46c62b4c6e6cd03feb2aa4"}
Mar 18 09:00:45.497313 master-0 kubenswrapper[6976]: I0318 09:00:45.496783 6976 scope.go:117] "RemoveContainer" containerID="49a79a26d80521d4a77ceb38753751818ca40b01df46c62b4c6e6cd03feb2aa4"
Mar 18 09:00:45.501742 master-0 kubenswrapper[6976]: I0318 09:00:45.501688 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/0.log"
Mar 18 09:00:45.501900 master-0 kubenswrapper[6976]: I0318 09:00:45.501806 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5"}
Mar 18 09:00:46.281125 master-0 kubenswrapper[6976]: I0318 09:00:46.280999 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:46.281125 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:46.281125 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:46.281125 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:46.281125 master-0 kubenswrapper[6976]: I0318 09:00:46.281115 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:46.512428 master-0 kubenswrapper[6976]: I0318 09:00:46.512359 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log"
Mar 18 09:00:46.513185 master-0 kubenswrapper[6976]: I0318 09:00:46.512494 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" event={"ID":"25781967-12ce-490e-94aa-9b9722f495da","Type":"ContainerStarted","Data":"525b2fb97a72e6503c7be5a5b231c1af20b83b34615d8371de80cb41191f2afc"}
Mar 18 09:00:46.516010 master-0 kubenswrapper[6976]: I0318 09:00:46.515965 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/machine-approver-controller/0.log"
Mar 18 09:00:46.516441 master-0 kubenswrapper[6976]: I0318 09:00:46.516385 6976 generic.go:334] "Generic (PLEG): container finished" podID="cdcd27a4-6d46-47af-a14a-65f6501c10f0" containerID="ca74e483ee5f7795ddd4a19b8dedb0099339c33aeba4c489fb33f3fdb2d038a6" exitCode=255
Mar 18 09:00:46.516509 master-0 kubenswrapper[6976]: I0318 09:00:46.516445 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" event={"ID":"cdcd27a4-6d46-47af-a14a-65f6501c10f0","Type":"ContainerDied","Data":"ca74e483ee5f7795ddd4a19b8dedb0099339c33aeba4c489fb33f3fdb2d038a6"}
Mar 18 09:00:46.517084 master-0 kubenswrapper[6976]: I0318 09:00:46.517035 6976 scope.go:117] "RemoveContainer" containerID="ca74e483ee5f7795ddd4a19b8dedb0099339c33aeba4c489fb33f3fdb2d038a6"
Mar 18 09:00:47.280511 master-0 kubenswrapper[6976]: I0318 09:00:47.280412 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:47.280511 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:47.280511 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:47.280511 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:47.280970 master-0 kubenswrapper[6976]: I0318 09:00:47.280514 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:47.528017 master-0 kubenswrapper[6976]: I0318 09:00:47.527936 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/2.log"
Mar 18 09:00:47.529129 master-0 kubenswrapper[6976]: I0318 09:00:47.528697 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/1.log"
Mar 18 09:00:47.529129 master-0 kubenswrapper[6976]: I0318 09:00:47.528760 6976 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8" exitCode=1
Mar 18 09:00:47.529129 master-0 kubenswrapper[6976]: I0318 09:00:47.528850 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerDied","Data":"f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8"}
Mar 18 09:00:47.529129 master-0 kubenswrapper[6976]: I0318 09:00:47.528900 6976 scope.go:117] "RemoveContainer" containerID="846d9dc4a6c1b4a6bf039195850d60f812737e3d5e44c652f1e1634888edfe9d"
Mar 18 09:00:47.530018 master-0 kubenswrapper[6976]: I0318 09:00:47.529674 6976 scope.go:117] "RemoveContainer" containerID="f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8"
Mar 18 09:00:47.530113 master-0 kubenswrapper[6976]: E0318 09:00:47.530017 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305"
Mar 18 09:00:47.532303 master-0 kubenswrapper[6976]: I0318 09:00:47.532206 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/machine-approver-controller/0.log"
Mar 18 09:00:47.533188 master-0 kubenswrapper[6976]: I0318 09:00:47.533115 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" event={"ID":"cdcd27a4-6d46-47af-a14a-65f6501c10f0","Type":"ContainerStarted","Data":"3b7db1b8a233abb3299866fb84b9d4a9323e807830b3893e83d635d7ffe8eb30"}
Mar 18 09:00:48.280411 master-0 kubenswrapper[6976]: I0318 09:00:48.280330 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:00:48.280411 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:00:48.280411 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:00:48.280411 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:00:48.280896 master-0 kubenswrapper[6976]: I0318 09:00:48.280414 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:00:48.551919 master-0 kubenswrapper[6976]: I0318 09:00:48.551741 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/2.log"
Mar 18 09:00:48.599377 master-0 kubenswrapper[6976]: I0318 09:00:48.599187 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:00:48.599756 master-0 kubenswrapper[6976]: E0318 09:00:48.599668 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager
pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:00:49.280416 master-0 kubenswrapper[6976]: I0318 09:00:49.280337 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:49.280416 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:49.280416 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:49.280416 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:49.280416 master-0 kubenswrapper[6976]: I0318 09:00:49.280412 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:50.281480 master-0 kubenswrapper[6976]: I0318 09:00:50.281374 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:50.281480 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:50.281480 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:50.281480 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:50.281480 master-0 kubenswrapper[6976]: I0318 09:00:50.281473 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 
09:00:50.572407 master-0 kubenswrapper[6976]: I0318 09:00:50.572189 6976 generic.go:334] "Generic (PLEG): container finished" podID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" exitCode=0 Mar 18 09:00:50.572407 master-0 kubenswrapper[6976]: I0318 09:00:50.572265 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerDied","Data":"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74"} Mar 18 09:00:50.573341 master-0 kubenswrapper[6976]: I0318 09:00:50.573294 6976 scope.go:117] "RemoveContainer" containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" Mar 18 09:00:50.600639 master-0 kubenswrapper[6976]: I0318 09:00:50.599651 6976 scope.go:117] "RemoveContainer" containerID="9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db" Mar 18 09:00:51.280944 master-0 kubenswrapper[6976]: I0318 09:00:51.280872 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:51.280944 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:51.280944 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:51.280944 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:51.281402 master-0 kubenswrapper[6976]: I0318 09:00:51.280965 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:51.583862 master-0 kubenswrapper[6976]: I0318 09:00:51.583655 6976 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" event={"ID":"7cac1300-44c1-4a7d-8d14-efa9702ad9df","Type":"ContainerStarted","Data":"e0d7b753d7b5cb543b8843197227d8571e32ffa7eb8e783761eda50964092160"} Mar 18 09:00:51.587102 master-0 kubenswrapper[6976]: I0318 09:00:51.587038 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerStarted","Data":"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26"} Mar 18 09:00:51.587537 master-0 kubenswrapper[6976]: I0318 09:00:51.587476 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:00:51.593782 master-0 kubenswrapper[6976]: I0318 09:00:51.593707 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:00:52.280327 master-0 kubenswrapper[6976]: I0318 09:00:52.280262 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:52.280327 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:52.280327 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:52.280327 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:52.280663 master-0 kubenswrapper[6976]: I0318 09:00:52.280341 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:53.280227 master-0 kubenswrapper[6976]: I0318 09:00:53.280094 6976 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:53.280227 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:53.280227 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:53.280227 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:53.281276 master-0 kubenswrapper[6976]: I0318 09:00:53.280231 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:54.280521 master-0 kubenswrapper[6976]: I0318 09:00:54.280225 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:54.280521 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:54.280521 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:54.280521 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:54.280521 master-0 kubenswrapper[6976]: I0318 09:00:54.280341 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:55.280888 master-0 kubenswrapper[6976]: I0318 09:00:55.280786 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:55.280888 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:55.280888 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:55.280888 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:55.281632 master-0 kubenswrapper[6976]: I0318 09:00:55.280879 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:56.280482 master-0 kubenswrapper[6976]: I0318 09:00:56.280314 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:56.280482 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:56.280482 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:00:56.280482 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:56.280482 master-0 kubenswrapper[6976]: I0318 09:00:56.280401 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:57.280836 master-0 kubenswrapper[6976]: I0318 09:00:57.280762 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:57.280836 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:57.280836 master-0 kubenswrapper[6976]: [+]process-running ok 
Mar 18 09:00:57.280836 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:57.281668 master-0 kubenswrapper[6976]: I0318 09:00:57.280853 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:57.599079 master-0 kubenswrapper[6976]: I0318 09:00:57.598935 6976 scope.go:117] "RemoveContainer" containerID="f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8" Mar 18 09:00:57.599490 master-0 kubenswrapper[6976]: E0318 09:00:57.599434 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:00:58.180631 master-0 kubenswrapper[6976]: E0318 09:00:58.180501 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:00:58.280116 master-0 kubenswrapper[6976]: I0318 09:00:58.279998 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:58.280116 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:58.280116 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 
09:00:58.280116 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:58.280116 master-0 kubenswrapper[6976]: I0318 09:00:58.280108 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:00:58.971164 master-0 kubenswrapper[6976]: E0318 09:00:58.970966 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3ce772f8d22 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:24.15621661 +0000 UTC m=+603.741818235,LastTimestamp:2026-03-18 08:58:32.426308905 +0000 UTC m=+612.011910540,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:00:59.280833 master-0 kubenswrapper[6976]: I0318 09:00:59.280643 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:00:59.280833 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:00:59.280833 master-0 kubenswrapper[6976]: 
[+]process-running ok Mar 18 09:00:59.280833 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:00:59.280833 master-0 kubenswrapper[6976]: I0318 09:00:59.280726 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:00.280703 master-0 kubenswrapper[6976]: I0318 09:01:00.280542 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:00.280703 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:00.280703 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:00.280703 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:00.280703 master-0 kubenswrapper[6976]: I0318 09:01:00.280667 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:01.281256 master-0 kubenswrapper[6976]: I0318 09:01:01.281164 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:01.281256 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:01.281256 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:01.281256 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:01.282290 master-0 kubenswrapper[6976]: I0318 09:01:01.281316 6976 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:01.942237 master-0 kubenswrapper[6976]: E0318 09:01:01.942123 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:00:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:00:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:00:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:00:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:02.280551 master-0 kubenswrapper[6976]: I0318 09:01:02.280479 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:02.280551 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:02.280551 master-0 kubenswrapper[6976]: 
[+]process-running ok Mar 18 09:01:02.280551 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:02.281129 master-0 kubenswrapper[6976]: I0318 09:01:02.280598 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:03.280442 master-0 kubenswrapper[6976]: I0318 09:01:03.280385 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:03.280442 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:03.280442 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:03.280442 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:03.281200 master-0 kubenswrapper[6976]: I0318 09:01:03.280494 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:03.599678 master-0 kubenswrapper[6976]: I0318 09:01:03.599471 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:01:03.600061 master-0 kubenswrapper[6976]: E0318 09:01:03.599999 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 
09:01:04.279856 master-0 kubenswrapper[6976]: I0318 09:01:04.279717 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:04.279856 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:04.279856 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:04.279856 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:04.279856 master-0 kubenswrapper[6976]: I0318 09:01:04.279832 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:05.282558 master-0 kubenswrapper[6976]: I0318 09:01:05.282451 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:05.282558 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:05.282558 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:05.282558 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:05.282558 master-0 kubenswrapper[6976]: I0318 09:01:05.282561 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:06.280359 master-0 kubenswrapper[6976]: I0318 09:01:06.280268 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:06.280359 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:06.280359 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:06.280359 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:06.281006 master-0 kubenswrapper[6976]: I0318 09:01:06.280352 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:07.280458 master-0 kubenswrapper[6976]: I0318 09:01:07.280383 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:07.280458 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:07.280458 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:07.280458 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:07.281021 master-0 kubenswrapper[6976]: I0318 09:01:07.280453 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:08.281296 master-0 kubenswrapper[6976]: I0318 09:01:08.281211 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:08.281296 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:08.281296 
master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:08.281296 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:08.282267 master-0 kubenswrapper[6976]: I0318 09:01:08.281315 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:09.282732 master-0 kubenswrapper[6976]: I0318 09:01:09.282627 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:09.282732 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:09.282732 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:09.282732 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:09.282732 master-0 kubenswrapper[6976]: I0318 09:01:09.282703 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:09.397235 master-0 kubenswrapper[6976]: E0318 09:01:09.397127 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:10.279949 master-0 kubenswrapper[6976]: I0318 09:01:10.279820 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:10.279949 master-0 kubenswrapper[6976]: [-]has-synced 
failed: reason withheld Mar 18 09:01:10.279949 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:10.279949 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:10.280476 master-0 kubenswrapper[6976]: I0318 09:01:10.279958 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:10.626105 master-0 kubenswrapper[6976]: I0318 09:01:10.626028 6976 status_manager.go:851] "Failed to get status for pod" podUID="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" pod="openshift-network-node-identity/network-node-identity-lf7kq" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods network-node-identity-lf7kq)" Mar 18 09:01:10.759122 master-0 kubenswrapper[6976]: I0318 09:01:10.759059 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"568a67ef6669824859d31edfa49f03a313b1376806d5623e2b85e3955cdc8a8c"} Mar 18 09:01:10.759122 master-0 kubenswrapper[6976]: I0318 09:01:10.759114 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"508ba28f4996f4846c09ffaac0d5fd73f81397921594eed543f49f2663c92153"} Mar 18 09:01:10.759122 master-0 kubenswrapper[6976]: I0318 09:01:10.759127 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"78adc9fceec0398f87741046798ef37a06ff88e851d3911c97f4d19ca0250270"} Mar 18 09:01:10.759422 master-0 kubenswrapper[6976]: I0318 09:01:10.759139 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"1f7b3a7ed16a4b262bbae39dc4d7a6a48993213e9a09aa0191819566831513ec"} Mar 18 09:01:11.280944 master-0 kubenswrapper[6976]: I0318 09:01:11.280866 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:11.280944 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:11.280944 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:11.280944 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:11.281391 master-0 kubenswrapper[6976]: I0318 09:01:11.280972 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:11.598467 master-0 kubenswrapper[6976]: I0318 09:01:11.598303 6976 scope.go:117] "RemoveContainer" containerID="f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8" Mar 18 09:01:11.774345 master-0 kubenswrapper[6976]: I0318 09:01:11.774286 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3ca8adab6e36fc6073de1c6ddada1eb6d6c8531a7b3f49bf5696edf52269053b"} Mar 18 09:01:11.775512 master-0 kubenswrapper[6976]: I0318 09:01:11.774795 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:01:11.775512 master-0 kubenswrapper[6976]: I0318 09:01:11.774842 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:01:11.943598 master-0 
kubenswrapper[6976]: E0318 09:01:11.943327 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:12.279863 master-0 kubenswrapper[6976]: I0318 09:01:12.279774 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:12.279863 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:12.279863 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:12.279863 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:12.279863 master-0 kubenswrapper[6976]: I0318 09:01:12.279851 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:12.786239 master-0 kubenswrapper[6976]: I0318 09:01:12.786140 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/2.log" Mar 18 09:01:12.786239 master-0 kubenswrapper[6976]: I0318 09:01:12.786241 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc"} Mar 18 09:01:13.282625 master-0 kubenswrapper[6976]: I0318 09:01:13.280068 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:13.282625 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:13.282625 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:13.282625 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:13.282625 master-0 kubenswrapper[6976]: I0318 09:01:13.280196 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: I0318 09:01:14.279361 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: I0318 09:01:14.279425 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:01:14.279443 master-0 kubenswrapper[6976]: I0318 09:01:14.279465 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:01:14.280723 master-0 kubenswrapper[6976]: I0318 09:01:14.280004 6976 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="router" containerStatusID={"Type":"cri-o","ID":"dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e"} pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" containerMessage="Container router failed startup probe, will be restarted" Mar 18 09:01:14.280723 master-0 kubenswrapper[6976]: I0318 09:01:14.280037 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" containerID="cri-o://dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e" gracePeriod=3600 Mar 18 09:01:14.639409 master-0 kubenswrapper[6976]: I0318 09:01:14.639259 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:14.639988 master-0 kubenswrapper[6976]: I0318 09:01:14.639958 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:15.181812 master-0 kubenswrapper[6976]: E0318 09:01:15.181704 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:01:16.599300 master-0 kubenswrapper[6976]: I0318 09:01:16.599249 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:01:16.600696 master-0 kubenswrapper[6976]: E0318 09:01:16.600645 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:01:21.944224 master-0 kubenswrapper[6976]: E0318 09:01:21.944140 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:24.677352 master-0 kubenswrapper[6976]: I0318 09:01:24.677278 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:29.599205 master-0 kubenswrapper[6976]: I0318 09:01:29.599126 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:01:29.604553 master-0 kubenswrapper[6976]: E0318 09:01:29.599844 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:01:29.652519 master-0 kubenswrapper[6976]: I0318 09:01:29.652477 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:31.944644 master-0 kubenswrapper[6976]: E0318 09:01:31.944469 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:32.183562 master-0 kubenswrapper[6976]: E0318 09:01:32.183475 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:01:32.974049 master-0 kubenswrapper[6976]: E0318 09:01:32.973842 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3ce772f8d22 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:24.15621661 +0000 UTC m=+603.741818235,LastTimestamp:2026-03-18 08:58:33.566394567 +0000 UTC m=+613.151996172,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:01:40.598150 master-0 kubenswrapper[6976]: I0318 09:01:40.598075 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:01:40.599145 master-0 kubenswrapper[6976]: E0318 09:01:40.598300 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:01:41.945002 master-0 kubenswrapper[6976]: E0318 09:01:41.944873 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:01:41.945002 master-0 kubenswrapper[6976]: E0318 09:01:41.944976 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 09:01:42.054336 master-0 kubenswrapper[6976]: I0318 09:01:42.054300 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/3.log" Mar 18 09:01:42.054963 master-0 kubenswrapper[6976]: I0318 09:01:42.054939 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/2.log" Mar 18 09:01:42.055080 master-0 kubenswrapper[6976]: I0318 09:01:42.055061 6976 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc" exitCode=1 Mar 18 09:01:42.055177 master-0 kubenswrapper[6976]: I0318 09:01:42.055118 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerDied","Data":"8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc"} Mar 18 09:01:42.055228 master-0 kubenswrapper[6976]: I0318 09:01:42.055209 6976 scope.go:117] "RemoveContainer" containerID="f6285da562144cf437a330aa3cbd3762a2abfd67f37bb901285417c5c38e8ab8" Mar 18 09:01:42.055885 master-0 
kubenswrapper[6976]: I0318 09:01:42.055871 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc" Mar 18 09:01:42.056249 master-0 kubenswrapper[6976]: E0318 09:01:42.056230 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:01:43.067189 master-0 kubenswrapper[6976]: I0318 09:01:43.067123 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/3.log" Mar 18 09:01:45.085156 master-0 kubenswrapper[6976]: I0318 09:01:45.084986 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/1.log" Mar 18 09:01:45.086433 master-0 kubenswrapper[6976]: I0318 09:01:45.086342 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/0.log" Mar 18 09:01:45.086531 master-0 kubenswrapper[6976]: I0318 09:01:45.086466 6976 generic.go:334] "Generic (PLEG): container finished" podID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" containerID="2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5" exitCode=1 Mar 18 09:01:45.086634 master-0 kubenswrapper[6976]: I0318 09:01:45.086528 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" 
event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerDied","Data":"2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5"} Mar 18 09:01:45.086712 master-0 kubenswrapper[6976]: I0318 09:01:45.086657 6976 scope.go:117] "RemoveContainer" containerID="0caedbadbfcaeb7785b9d06130fc6e0d2a7ecb9753168035bbf898c397b762cf" Mar 18 09:01:45.087799 master-0 kubenswrapper[6976]: I0318 09:01:45.087754 6976 scope.go:117] "RemoveContainer" containerID="2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5" Mar 18 09:01:45.088783 master-0 kubenswrapper[6976]: E0318 09:01:45.088355 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mcd6d_openshift-machine-api(eb8f3615-9e89-4b51-87a2-7d168c81adf3)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" podUID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" Mar 18 09:01:45.778457 master-0 kubenswrapper[6976]: E0318 09:01:45.778378 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 18 09:01:46.097278 master-0 kubenswrapper[6976]: I0318 09:01:46.097127 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/1.log" Mar 18 09:01:46.098211 master-0 kubenswrapper[6976]: I0318 09:01:46.098065 6976 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:01:46.098211 master-0 kubenswrapper[6976]: I0318 09:01:46.098100 6976 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" 
Mar 18 09:01:49.185979 master-0 kubenswrapper[6976]: E0318 09:01:49.185663 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:01:52.599249 master-0 kubenswrapper[6976]: I0318 09:01:52.599096 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:01:52.600129 master-0 kubenswrapper[6976]: E0318 09:01:52.599473 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:01:56.598518 master-0 kubenswrapper[6976]: I0318 09:01:56.598417 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc" Mar 18 09:01:56.599528 master-0 kubenswrapper[6976]: E0318 09:01:56.598699 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:01:58.599039 master-0 kubenswrapper[6976]: I0318 09:01:58.598979 6976 scope.go:117] "RemoveContainer" containerID="2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5" Mar 18 09:01:59.207004 master-0 
kubenswrapper[6976]: I0318 09:01:59.206940 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/1.log" Mar 18 09:01:59.208670 master-0 kubenswrapper[6976]: I0318 09:01:59.207613 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857"} Mar 18 09:02:00.216846 master-0 kubenswrapper[6976]: I0318 09:02:00.216751 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/4.log" Mar 18 09:02:00.217837 master-0 kubenswrapper[6976]: I0318 09:02:00.217749 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/3.log" Mar 18 09:02:00.218415 master-0 kubenswrapper[6976]: I0318 09:02:00.218310 6976 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" exitCode=1 Mar 18 09:02:00.218415 master-0 kubenswrapper[6976]: I0318 09:02:00.218388 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerDied","Data":"0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50"} Mar 18 09:02:00.218644 master-0 kubenswrapper[6976]: I0318 09:02:00.218463 6976 scope.go:117] "RemoveContainer" containerID="736d8fa2eb3b5d4bf2fa6ebaf328e5d0f20bdff1b6da32fa492c4e843bc10226" Mar 18 09:02:00.219636 master-0 kubenswrapper[6976]: I0318 09:02:00.219537 
6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" Mar 18 09:02:00.220882 master-0 kubenswrapper[6976]: E0318 09:02:00.220359 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 09:02:01.227825 master-0 kubenswrapper[6976]: I0318 09:02:01.227733 6976 generic.go:334] "Generic (PLEG): container finished" podID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerID="dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e" exitCode=0 Mar 18 09:02:01.227825 master-0 kubenswrapper[6976]: I0318 09:02:01.227817 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerDied","Data":"dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e"} Mar 18 09:02:01.228958 master-0 kubenswrapper[6976]: I0318 09:02:01.227845 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" event={"ID":"93cb5ef1-e8f1-4d11-8c93-1abf24626176","Type":"ContainerStarted","Data":"132ae934391c5a67391af4a09e4d14d10769a3ae61f20d05f3107436d6c72dd0"} Mar 18 09:02:01.228958 master-0 kubenswrapper[6976]: I0318 09:02:01.227865 6976 scope.go:117] "RemoveContainer" containerID="8822d8d1cd61ab70d73bc23715778ff88e202eedade5838abd00a7ee1f05085e" Mar 18 09:02:01.230677 master-0 kubenswrapper[6976]: I0318 09:02:01.230620 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/4.log" 
Mar 18 09:02:01.277253 master-0 kubenswrapper[6976]: I0318 09:02:01.277163 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:02:01.281117 master-0 kubenswrapper[6976]: I0318 09:02:01.281040 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:01.281117 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:01.281117 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:01.281117 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:01.281471 master-0 kubenswrapper[6976]: I0318 09:02:01.281137 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:02.245127 master-0 kubenswrapper[6976]: E0318 09:02:02.245026 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:01:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:02.280435 master-0 kubenswrapper[6976]: I0318 09:02:02.280326 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:02.280435 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:02.280435 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:02.280435 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:02.280435 master-0 kubenswrapper[6976]: I0318 09:02:02.280423 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Mar 18 09:02:03.280588 master-0 kubenswrapper[6976]: I0318 09:02:03.280483 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:03.280588 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:03.280588 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:03.280588 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:03.281287 master-0 kubenswrapper[6976]: I0318 09:02:03.280632 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:04.280431 master-0 kubenswrapper[6976]: I0318 09:02:04.280365 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:04.280431 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:04.280431 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:04.280431 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:04.282011 master-0 kubenswrapper[6976]: I0318 09:02:04.281963 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:05.277812 master-0 kubenswrapper[6976]: I0318 09:02:05.277715 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" 
Mar 18 09:02:05.281432 master-0 kubenswrapper[6976]: I0318 09:02:05.281366 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:05.281432 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:05.281432 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:05.281432 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:05.282314 master-0 kubenswrapper[6976]: I0318 09:02:05.281443 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:05.598745 master-0 kubenswrapper[6976]: I0318 09:02:05.598539 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:02:05.599494 master-0 kubenswrapper[6976]: E0318 09:02:05.599448 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:02:06.187246 master-0 kubenswrapper[6976]: E0318 09:02:06.187195 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 18 09:02:06.281616 master-0 kubenswrapper[6976]: I0318 
09:02:06.281503 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:06.281616 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:06.281616 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:06.281616 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:06.282746 master-0 kubenswrapper[6976]: I0318 09:02:06.281650 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:06.977674 master-0 kubenswrapper[6976]: E0318 09:02:06.977458 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189de3ce772f8d22 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:BackOff,Message:Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:58:24.15621661 +0000 UTC m=+603.741818235,LastTimestamp:2026-03-18 08:58:34.248804281 +0000 UTC m=+613.834405916,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:02:07.279144 master-0 
kubenswrapper[6976]: I0318 09:02:07.278987 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:07.279144 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:07.279144 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:07.279144 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:07.279144 master-0 kubenswrapper[6976]: I0318 09:02:07.279044 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:07.598221 master-0 kubenswrapper[6976]: I0318 09:02:07.598081 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc"
Mar 18 09:02:07.598959 master-0 kubenswrapper[6976]: E0318 09:02:07.598425 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305"
Mar 18 09:02:08.281245 master-0 kubenswrapper[6976]: I0318 09:02:08.281104 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:08.281245 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:08.281245 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:08.281245 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:08.281245 master-0 kubenswrapper[6976]: I0318 09:02:08.281213 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:09.280160 master-0 kubenswrapper[6976]: I0318 09:02:09.280056 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:09.280160 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:09.280160 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:09.280160 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:09.280160 master-0 kubenswrapper[6976]: I0318 09:02:09.280144 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:10.279774 master-0 kubenswrapper[6976]: I0318 09:02:10.279674 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:10.279774 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:10.279774 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:10.279774 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:10.280765 master-0 kubenswrapper[6976]: I0318 09:02:10.279775 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:10.627795 master-0 kubenswrapper[6976]: I0318 09:02:10.627647 6976 status_manager.go:851] "Failed to get status for pod" podUID="46f265536aba6292ead501bc9b49f327" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-controller-manager-master-0)"
Mar 18 09:02:11.281303 master-0 kubenswrapper[6976]: I0318 09:02:11.281209 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:11.281303 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:11.281303 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:11.281303 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:11.282509 master-0 kubenswrapper[6976]: I0318 09:02:11.281305 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:12.245406 master-0 kubenswrapper[6976]: E0318 09:02:12.245342 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:02:12.279189 master-0 kubenswrapper[6976]: I0318 09:02:12.279119 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:12.279189 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:12.279189 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:12.279189 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:12.279476 master-0 kubenswrapper[6976]: I0318 09:02:12.279225 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:13.280512 master-0 kubenswrapper[6976]: I0318 09:02:13.280456 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:13.280512 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:13.280512 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:13.280512 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:13.281525 master-0 kubenswrapper[6976]: I0318 09:02:13.281192 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:13.598810 master-0 kubenswrapper[6976]: I0318 09:02:13.598653 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50"
Mar 18 09:02:13.599082 master-0 kubenswrapper[6976]: E0318 09:02:13.598934 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 09:02:14.280010 master-0 kubenswrapper[6976]: I0318 09:02:14.279930 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:14.280010 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:14.280010 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:14.280010 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:14.280679 master-0 kubenswrapper[6976]: I0318 09:02:14.280635 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:15.280385 master-0 kubenswrapper[6976]: I0318 09:02:15.280308 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:15.280385 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:15.280385 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:15.280385 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:15.281688 master-0 kubenswrapper[6976]: I0318 09:02:15.280403 6976 prober.go:107] "Probe failed" probeType="Startup"
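[Editor's note: the repeating `patch_prober.go` / `prober.go` pairs above are one startup-probe cycle per second: the kubelet issues an HTTP GET against the router's health endpoint, and the 500 response is recorded as a probe failure; only once the probe succeeds (or the failure threshold is exhausted) does that outcome change. A minimal sketch of that decision logic, assuming hypothetical names (`startup_probe_verdict`, `failure_threshold=3`) — this is not the kubelet's actual prober code:]

```python
def startup_probe_verdict(status_codes, failure_threshold=3):
    """Simulate startup-probe bookkeeping over a sequence of HTTP status codes.

    A 2xx/3xx response ends startup successfully; once `failure_threshold`
    consecutive failures accumulate, the container is restarted.
    """
    failures = 0
    for code in status_codes:
        if 200 <= code < 400:
            return "started"          # probe passed; container considered started
        failures += 1                 # e.g. the 500s seen in the log above
        if failures >= failure_threshold:
            return "restart"          # threshold exhausted; kubelet restarts it
    return "probing"                  # still within the failure budget
```

[With a threshold of 3, the run of 500s in this log would exhaust the budget after three probes; a single 200 at any point ends startup.]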
pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:16.281362 master-0 kubenswrapper[6976]: I0318 09:02:16.281275 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:16.281362 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:16.281362 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:16.281362 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:16.282489 master-0 kubenswrapper[6976]: I0318 09:02:16.281365 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:17.280516 master-0 kubenswrapper[6976]: I0318 09:02:17.280451 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:17.280516 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:17.280516 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:17.280516 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:17.280979 master-0 kubenswrapper[6976]: I0318 09:02:17.280530 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:18.279906 master-0 kubenswrapper[6976]: I0318 09:02:18.279846 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:18.279906 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:18.279906 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:18.279906 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:18.280529 master-0 kubenswrapper[6976]: I0318 09:02:18.279932 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:19.280491 master-0 kubenswrapper[6976]: I0318 09:02:19.280392 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:19.280491 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:19.280491 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:19.280491 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:19.281411 master-0 kubenswrapper[6976]: I0318 09:02:19.280501 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:20.100473 master-0 kubenswrapper[6976]: E0318 09:02:20.100419 6976 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 18 09:02:20.280048 master-0 kubenswrapper[6976]: I0318 09:02:20.279994 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:20.280048 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:20.280048 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:20.280048 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:20.280744 master-0 kubenswrapper[6976]: I0318 09:02:20.280685 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:20.597854 master-0 kubenswrapper[6976]: I0318 09:02:20.597806 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:02:20.598127 master-0 kubenswrapper[6976]: E0318 09:02:20.598031 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:02:21.280263 master-0 kubenswrapper[6976]: I0318 09:02:21.280183 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:21.280263 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:21.280263 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:21.280263 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:21.280602 master-0 kubenswrapper[6976]: I0318 09:02:21.280269 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:21.598250 master-0 kubenswrapper[6976]: I0318 09:02:21.598129 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc"
Mar 18 09:02:21.598735 master-0 kubenswrapper[6976]: E0318 09:02:21.598376 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305"
Mar 18 09:02:22.246616 master-0 kubenswrapper[6976]: E0318 09:02:22.246522 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:02:22.279935 master-0 kubenswrapper[6976]: I0318 09:02:22.279873 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:22.279935 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:22.279935 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:22.279935 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:22.280247 master-0 kubenswrapper[6976]: I0318 09:02:22.279936 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:23.189779 master-0 kubenswrapper[6976]: E0318 09:02:23.189665 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 09:02:23.281725 master-0 kubenswrapper[6976]: I0318 09:02:23.281642 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:23.281725 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:23.281725 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:23.281725 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:23.282048 master-0 kubenswrapper[6976]: I0318 09:02:23.281727 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:24.280994 master-0 kubenswrapper[6976]: I0318 09:02:24.280851 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500"
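[Editor's note: the `controller.go:145` entry above shows the kubelet failing to ensure its node lease against the API server and scheduling a retry with `interval="7s"`. A sketch of that retry shape, under stated assumptions — `ensure_lease`, its parameters, and the fixed attempt count are illustrative, not the kubelet's real lease controller:]

```python
import time

def ensure_lease(fetch, interval=7.0, attempts=3, sleep=time.sleep):
    """Retry a lease-ensure call: on timeout, wait `interval` seconds and retry.

    `fetch` stands in for the API call that creates/renews the node lease;
    `sleep` is injectable so the behavior can be exercised without waiting.
    """
    last_err = None
    for i in range(attempts):
        try:
            return fetch()                 # lease ensured; done
        except TimeoutError as err:        # e.g. Client.Timeout as in the log
            last_err = err
            if i < attempts - 1:
                sleep(interval)            # "will retry ... interval=7s"
    raise last_err                         # budget exhausted; surface the error
```

[Injecting `sleep` keeps the retry cadence testable; the real kubelet retries indefinitely on its own loop rather than a bounded attempt count.]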
start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:24.280994 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:24.280994 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:24.280994 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:24.280994 master-0 kubenswrapper[6976]: I0318 09:02:24.280962 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:24.598805 master-0 kubenswrapper[6976]: I0318 09:02:24.598670 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50"
Mar 18 09:02:24.599497 master-0 kubenswrapper[6976]: E0318 09:02:24.599456 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 09:02:25.280709 master-0 kubenswrapper[6976]: I0318 09:02:25.280595 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:25.280709 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:25.280709 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:25.280709 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:25.280709 master-0 kubenswrapper[6976]: I0318 09:02:25.280699 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:26.280486 master-0 kubenswrapper[6976]: I0318 09:02:26.280402 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:26.280486 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:26.280486 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:26.280486 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:26.280486 master-0 kubenswrapper[6976]: I0318 09:02:26.280483 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:27.279384 master-0 kubenswrapper[6976]: I0318 09:02:27.279342 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:27.279384 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:27.279384 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:27.279384 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:27.280286 master-0 kubenswrapper[6976]: I0318 09:02:27.280251 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:28.279381 master-0 kubenswrapper[6976]: I0318 09:02:28.279338 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:28.279381 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:28.279381 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:28.279381 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:28.280118 master-0 kubenswrapper[6976]: I0318 09:02:28.280090 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:29.280537 master-0 kubenswrapper[6976]: I0318 09:02:29.280441 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:29.280537 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:29.280537 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:29.280537 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:29.281334 master-0 kubenswrapper[6976]: I0318 09:02:29.280608 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:30.281013 master-0 kubenswrapper[6976]: I0318 09:02:30.280938 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:30.281013 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:30.281013 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:30.281013 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:30.281013 master-0 kubenswrapper[6976]: I0318 09:02:30.281014 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:31.281160 master-0 kubenswrapper[6976]: I0318 09:02:31.281065 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:31.281160 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:31.281160 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:31.281160 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:31.281160 master-0 kubenswrapper[6976]: I0318 09:02:31.281144 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:31.599426 master-0 kubenswrapper[6976]: I0318 09:02:31.599243 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:02:31.599787 master-0 kubenswrapper[6976]: E0318 09:02:31.599712 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:02:32.248122 master-0 kubenswrapper[6976]: E0318 09:02:32.248005 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 18 09:02:32.279877 master-0 kubenswrapper[6976]: I0318 09:02:32.279786 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:32.279877 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:32.279877 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:32.279877 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:32.279877 master-0 kubenswrapper[6976]: I0318 09:02:32.279866 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:33.279382 master-0 kubenswrapper[6976]: I0318 09:02:33.279306 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:33.279382 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:33.279382 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:33.279382 master-0
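[Editor's note: three distinct CrashLoopBackOff delays appear in this window: 40s (snapshot-controller), 1m20s (ingress-operator), and 2m40s (kube-controller-manager). These are consecutive points on the kubelet's doubling restart back-off, which (by default, to my understanding) starts at 10s and is capped at 5m. A sketch of that progression, with the hypothetical name `crashloop_backoff`:]

```python
def crashloop_backoff(crash_count, base=10, cap=300):
    """Seconds to wait before the next restart after `crash_count` crashes.

    The delay doubles per crash from `base` and is clamped at `cap`
    (10s and 300s are assumed kubelet defaults). The 40s, 1m20s and 2m40s
    back-offs logged above correspond to crash_count 2, 3 and 4.
    """
    return min(base * (2 ** crash_count), cap)
```

[Once the cap is reached the pod keeps retrying every 5 minutes, which is why a crash-looping container never fully stops being scheduled.]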
kubenswrapper[6976]: healthz check failed
Mar 18 09:02:33.279382 master-0 kubenswrapper[6976]: I0318 09:02:33.279376 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:34.280601 master-0 kubenswrapper[6976]: I0318 09:02:34.280484 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:34.280601 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:34.280601 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:34.280601 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:34.281729 master-0 kubenswrapper[6976]: I0318 09:02:34.280616 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:34.598851 master-0 kubenswrapper[6976]: I0318 09:02:34.598465 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc"
Mar 18 09:02:35.280637 master-0 kubenswrapper[6976]: I0318 09:02:35.280533 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:35.280637 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:35.280637 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:35.280637 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:35.281627 master-0 kubenswrapper[6976]: I0318 09:02:35.280648 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:35.497398 master-0 kubenswrapper[6976]: I0318 09:02:35.497350 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/3.log"
Mar 18 09:02:35.497751 master-0 kubenswrapper[6976]: I0318 09:02:35.497419 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4"}
Mar 18 09:02:36.280764 master-0 kubenswrapper[6976]: I0318 09:02:36.280686 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:36.280764 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:36.280764 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:36.280764 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:36.281804 master-0 kubenswrapper[6976]: I0318 09:02:36.280767 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:36.599466 master-0 kubenswrapper[6976]: I0318 09:02:36.599284 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50"
Mar 18 09:02:36.599889 master-0 kubenswrapper[6976]: E0318 09:02:36.599831 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08"
Mar 18 09:02:37.281329 master-0 kubenswrapper[6976]: I0318 09:02:37.281153 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:37.281329 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:37.281329 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:37.281329 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:37.281329 master-0 kubenswrapper[6976]: I0318 09:02:37.281242 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:38.280095 master-0 kubenswrapper[6976]: I0318 09:02:38.280021 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:38.280095 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:38.280095 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:38.280095 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:38.280514 master-0 kubenswrapper[6976]: I0318 09:02:38.280110 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:39.279953 master-0 kubenswrapper[6976]: I0318 09:02:39.279886 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:39.279953 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:39.279953 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:39.279953 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:39.281186 master-0 kubenswrapper[6976]: I0318 09:02:39.281137 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:40.190997 master-0 kubenswrapper[6976]: E0318 09:02:40.190869 6976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 18 09:02:40.280114 master-0 kubenswrapper[6976]: I0318 09:02:40.280036 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:40.280114 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:40.280114 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:40.280114 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:40.280114 master-0 kubenswrapper[6976]: I0318 09:02:40.280108 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:40.979933 master-0 kubenswrapper[6976]: E0318 09:02:40.979797 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de35485acdacc kube-system 8777 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:40 +0000 UTC,LastTimestamp:2026-03-18 08:58:35.600783806 +0000 UTC m=+615.186385441,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:02:41.280400 master-0 kubenswrapper[6976]: I0318 09:02:41.280198 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:41.280400 master-0
kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:41.280400 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:41.280400 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:41.280400 master-0 kubenswrapper[6976]: I0318 09:02:41.280329 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:42.249016 master-0 kubenswrapper[6976]: E0318 09:02:42.248926 6976 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 18 09:02:42.249016 master-0 kubenswrapper[6976]: E0318 09:02:42.249009 6976 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 09:02:42.280973 master-0 kubenswrapper[6976]: I0318 09:02:42.280736 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:42.280973 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:42.280973 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:42.280973 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:42.281550 master-0 kubenswrapper[6976]: I0318 09:02:42.281048 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:43.280943 master-0 
kubenswrapper[6976]: I0318 09:02:43.280879 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:43.280943 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:43.280943 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:43.280943 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:43.282114 master-0 kubenswrapper[6976]: I0318 09:02:43.280973 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:43.598012 master-0 kubenswrapper[6976]: I0318 09:02:43.597905 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" Mar 18 09:02:43.598245 master-0 kubenswrapper[6976]: E0318 09:02:43.598203 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" Mar 18 09:02:44.280892 master-0 kubenswrapper[6976]: I0318 09:02:44.280791 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:44.280892 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:44.280892 master-0 
kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:44.280892 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:44.282259 master-0 kubenswrapper[6976]: I0318 09:02:44.280901 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:44.570090 master-0 kubenswrapper[6976]: I0318 09:02:44.570032 6976 generic.go:334] "Generic (PLEG): container finished" podID="2a864188-ada6-4ec2-bf9f-72dab210f0ce" containerID="0dee431f1bab8eafebe24c7c7116af4c82f57849d3fa9f78e391b177e72f8116" exitCode=0 Mar 18 09:02:44.570514 master-0 kubenswrapper[6976]: I0318 09:02:44.570211 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" event={"ID":"2a864188-ada6-4ec2-bf9f-72dab210f0ce","Type":"ContainerDied","Data":"0dee431f1bab8eafebe24c7c7116af4c82f57849d3fa9f78e391b177e72f8116"} Mar 18 09:02:44.571654 master-0 kubenswrapper[6976]: I0318 09:02:44.571431 6976 scope.go:117] "RemoveContainer" containerID="0dee431f1bab8eafebe24c7c7116af4c82f57849d3fa9f78e391b177e72f8116" Mar 18 09:02:44.573362 master-0 kubenswrapper[6976]: I0318 09:02:44.573322 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log" Mar 18 09:02:44.574498 master-0 kubenswrapper[6976]: I0318 09:02:44.574452 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" event={"ID":"e88b021c-c810-4a68-aa48-d8666b52330e","Type":"ContainerDied","Data":"191e1385839aadfcf8fad00f70dd0c37383e76893667c6d202209b39b27d4f57"} Mar 18 09:02:44.575310 master-0 kubenswrapper[6976]: I0318 09:02:44.574388 6976 generic.go:334] "Generic 
(PLEG): container finished" podID="e88b021c-c810-4a68-aa48-d8666b52330e" containerID="191e1385839aadfcf8fad00f70dd0c37383e76893667c6d202209b39b27d4f57" exitCode=255 Mar 18 09:02:44.575615 master-0 kubenswrapper[6976]: I0318 09:02:44.575504 6976 scope.go:117] "RemoveContainer" containerID="191e1385839aadfcf8fad00f70dd0c37383e76893667c6d202209b39b27d4f57" Mar 18 09:02:45.280981 master-0 kubenswrapper[6976]: I0318 09:02:45.280899 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:45.280981 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:45.280981 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:45.280981 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:45.282972 master-0 kubenswrapper[6976]: I0318 09:02:45.280990 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:45.594091 master-0 kubenswrapper[6976]: I0318 09:02:45.593896 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log" Mar 18 09:02:45.594826 master-0 kubenswrapper[6976]: I0318 09:02:45.594752 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" event={"ID":"e88b021c-c810-4a68-aa48-d8666b52330e","Type":"ContainerStarted","Data":"4f5b1a096140f763ec1eae87b20fb98fa36bba51c9dca96ef75e06d31cdcc421"} Mar 18 09:02:45.599154 master-0 kubenswrapper[6976]: I0318 09:02:45.599083 6976 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" event={"ID":"2a864188-ada6-4ec2-bf9f-72dab210f0ce","Type":"ContainerStarted","Data":"77ab631369fcfa6258d16ca33c25c273fdef04ff7c0f651dd464d4b745545954"} Mar 18 09:02:46.279955 master-0 kubenswrapper[6976]: I0318 09:02:46.279840 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:46.279955 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:46.279955 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:46.279955 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:46.280502 master-0 kubenswrapper[6976]: I0318 09:02:46.279997 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:47.280783 master-0 kubenswrapper[6976]: I0318 09:02:47.280668 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:47.280783 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:47.280783 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:47.280783 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:47.281810 master-0 kubenswrapper[6976]: I0318 09:02:47.280817 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 18 09:02:48.280772 master-0 kubenswrapper[6976]: I0318 09:02:48.280606 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:48.280772 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:48.280772 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:48.280772 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:48.281881 master-0 kubenswrapper[6976]: I0318 09:02:48.280841 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:48.625233 master-0 kubenswrapper[6976]: I0318 09:02:48.625078 6976 generic.go:334] "Generic (PLEG): container finished" podID="d7205eeb-912b-4c31-b08f-ed0b2a1319aa" containerID="50fd77676f2fb32890abad0222ed7ebdb08546cdf39f1ddb90ccc00d539b7f06" exitCode=0 Mar 18 09:02:48.625233 master-0 kubenswrapper[6976]: I0318 09:02:48.625130 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" event={"ID":"d7205eeb-912b-4c31-b08f-ed0b2a1319aa","Type":"ContainerDied","Data":"50fd77676f2fb32890abad0222ed7ebdb08546cdf39f1ddb90ccc00d539b7f06"} Mar 18 09:02:48.625780 master-0 kubenswrapper[6976]: I0318 09:02:48.625742 6976 scope.go:117] "RemoveContainer" containerID="50fd77676f2fb32890abad0222ed7ebdb08546cdf39f1ddb90ccc00d539b7f06" Mar 18 09:02:49.279988 master-0 kubenswrapper[6976]: I0318 09:02:49.279796 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:49.279988 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:49.279988 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:49.279988 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:49.279988 master-0 kubenswrapper[6976]: I0318 09:02:49.279974 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:49.635157 master-0 kubenswrapper[6976]: I0318 09:02:49.635011 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" event={"ID":"d7205eeb-912b-4c31-b08f-ed0b2a1319aa","Type":"ContainerStarted","Data":"9b5c3a968b62d558f18a697ad4cc6241a023fdfe80fedbbca4b10994c91c931b"} Mar 18 09:02:50.280975 master-0 kubenswrapper[6976]: I0318 09:02:50.280878 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:50.280975 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:50.280975 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:02:50.280975 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:50.281684 master-0 kubenswrapper[6976]: I0318 09:02:50.280990 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:50.528586 master-0 kubenswrapper[6976]: I0318 09:02:50.528521 6976 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-2-master-0"] Mar 18 09:02:50.528862 master-0 kubenswrapper[6976]: E0318 09:02:50.528829 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerName="installer" Mar 18 09:02:50.528862 master-0 kubenswrapper[6976]: I0318 09:02:50.528847 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerName="installer" Mar 18 09:02:50.528980 master-0 kubenswrapper[6976]: E0318 09:02:50.528877 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:02:50.528980 master-0 kubenswrapper[6976]: I0318 09:02:50.528890 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:02:50.528980 master-0 kubenswrapper[6976]: E0318 09:02:50.528911 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:02:50.528980 master-0 kubenswrapper[6976]: I0318 09:02:50.528920 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:02:50.529394 master-0 kubenswrapper[6976]: I0318 09:02:50.529068 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:02:50.529394 master-0 kubenswrapper[6976]: I0318 09:02:50.529081 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerName="installer" Mar 18 09:02:50.529394 master-0 kubenswrapper[6976]: I0318 09:02:50.529098 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:02:50.529820 master-0 kubenswrapper[6976]: I0318 
09:02:50.529781 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.545441 master-0 kubenswrapper[6976]: I0318 09:02:50.545391 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6mb4h" Mar 18 09:02:50.545826 master-0 kubenswrapper[6976]: I0318 09:02:50.545599 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 09:02:50.552184 master-0 kubenswrapper[6976]: I0318 09:02:50.552123 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-2-master-0"] Mar 18 09:02:50.580160 master-0 kubenswrapper[6976]: I0318 09:02:50.577712 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.580160 master-0 kubenswrapper[6976]: I0318 09:02:50.577873 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.580160 master-0 kubenswrapper[6976]: I0318 09:02:50.577942 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " 
pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.680505 master-0 kubenswrapper[6976]: I0318 09:02:50.680414 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.688643 master-0 kubenswrapper[6976]: I0318 09:02:50.680646 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.688643 master-0 kubenswrapper[6976]: I0318 09:02:50.680756 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.688643 master-0 kubenswrapper[6976]: I0318 09:02:50.681077 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.688643 master-0 kubenswrapper[6976]: I0318 09:02:50.681616 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: 
\"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.701070 master-0 kubenswrapper[6976]: I0318 09:02:50.700999 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:50.860006 master-0 kubenswrapper[6976]: I0318 09:02:50.859871 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:02:51.269608 master-0 kubenswrapper[6976]: I0318 09:02:51.269514 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 09:02:51.269864 master-0 kubenswrapper[6976]: I0318 09:02:51.269630 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 09:02:51.283943 master-0 kubenswrapper[6976]: I0318 09:02:51.283826 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:02:51.283943 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:02:51.283943 master-0 kubenswrapper[6976]: 
[+]process-running ok Mar 18 09:02:51.283943 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:02:51.283943 master-0 kubenswrapper[6976]: I0318 09:02:51.283890 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:02:51.287398 master-0 kubenswrapper[6976]: I0318 09:02:51.287342 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-2-master-0"] Mar 18 09:02:51.295257 master-0 kubenswrapper[6976]: I0318 09:02:51.295193 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" Mar 18 09:02:51.368261 master-0 kubenswrapper[6976]: W0318 09:02:51.368202 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc46fcf39_9167_4ec2_9d2c_0a622bc69d13.slice/crio-181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0 WatchSource:0}: Error finding container 181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0: Status 404 returned error can't find the container with id 181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0 Mar 18 09:02:51.598976 master-0 kubenswrapper[6976]: I0318 09:02:51.598890 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" Mar 18 09:02:51.599281 master-0 kubenswrapper[6976]: E0318 09:02:51.599212 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 09:02:51.651416 master-0 kubenswrapper[6976]: I0318 09:02:51.651208 6976 generic.go:334] "Generic (PLEG): container finished" podID="c5c995cf-40a0-4cd6-87fa-96a522f7bc57" containerID="f746e038f97898d00b98367b1de674491c64f30a9f70b4c41c7083bf263f99b2" exitCode=0 Mar 18 09:02:51.651416 master-0 kubenswrapper[6976]: I0318 09:02:51.651316 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" event={"ID":"c5c995cf-40a0-4cd6-87fa-96a522f7bc57","Type":"ContainerDied","Data":"f746e038f97898d00b98367b1de674491c64f30a9f70b4c41c7083bf263f99b2"} Mar 18 09:02:51.652032 master-0 kubenswrapper[6976]: I0318 09:02:51.651973 6976 scope.go:117] "RemoveContainer" containerID="f746e038f97898d00b98367b1de674491c64f30a9f70b4c41c7083bf263f99b2" Mar 18 09:02:51.654678 master-0 kubenswrapper[6976]: I0318 09:02:51.654635 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/0.log" Mar 18 09:02:51.654853 master-0 kubenswrapper[6976]: I0318 09:02:51.654707 6976 generic.go:334] "Generic (PLEG): container finished" podID="1deb139f-1903-417e-835c-28abdd156cdb" containerID="32b058c6d1ee238c753a849a50cae740263263767c61bf2151475052399455e0" exitCode=1 Mar 18 09:02:51.654853 master-0 kubenswrapper[6976]: I0318 09:02:51.654777 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" event={"ID":"1deb139f-1903-417e-835c-28abdd156cdb","Type":"ContainerDied","Data":"32b058c6d1ee238c753a849a50cae740263263767c61bf2151475052399455e0"} Mar 18 
Mar 18 09:02:51.655162 master-0 kubenswrapper[6976]: I0318 09:02:51.655127 6976 scope.go:117] "RemoveContainer" containerID="32b058c6d1ee238c753a849a50cae740263263767c61bf2151475052399455e0"
Mar 18 09:02:51.665778 master-0 kubenswrapper[6976]: I0318 09:02:51.665708 6976 generic.go:334] "Generic (PLEG): container finished" podID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" containerID="94a0ef05ccdfbfbab75ff3d50bbf9ce2c5410905e297dadef1700e3589016d40" exitCode=0
Mar 18 09:02:51.665932 master-0 kubenswrapper[6976]: I0318 09:02:51.665850 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerDied","Data":"94a0ef05ccdfbfbab75ff3d50bbf9ce2c5410905e297dadef1700e3589016d40"}
Mar 18 09:02:51.665932 master-0 kubenswrapper[6976]: I0318 09:02:51.665918 6976 scope.go:117] "RemoveContainer" containerID="9cdce5f3b67476e4d83692d6a7f121d082ca7bc4e1f5227b44f8955003a46e71"
Mar 18 09:02:51.666766 master-0 kubenswrapper[6976]: I0318 09:02:51.666699 6976 scope.go:117] "RemoveContainer" containerID="94a0ef05ccdfbfbab75ff3d50bbf9ce2c5410905e297dadef1700e3589016d40"
Mar 18 09:02:51.669681 master-0 kubenswrapper[6976]: I0318 09:02:51.669621 6976 generic.go:334] "Generic (PLEG): container finished" podID="680006ef-a955-491e-b6a3-1ca7fcc20165" containerID="f668ca32df6831c1852bfec6ac04b2b91b947fda7bf3560ef4ffe10748867750" exitCode=0
Mar 18 09:02:51.669851 master-0 kubenswrapper[6976]: I0318 09:02:51.669731 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" event={"ID":"680006ef-a955-491e-b6a3-1ca7fcc20165","Type":"ContainerDied","Data":"f668ca32df6831c1852bfec6ac04b2b91b947fda7bf3560ef4ffe10748867750"}
Mar 18 09:02:51.670840 master-0 kubenswrapper[6976]: I0318 09:02:51.670788 6976 scope.go:117] "RemoveContainer" containerID="f668ca32df6831c1852bfec6ac04b2b91b947fda7bf3560ef4ffe10748867750"
Mar 18 09:02:51.671817 master-0 kubenswrapper[6976]: I0318 09:02:51.671761 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" event={"ID":"c46fcf39-9167-4ec2-9d2c-0a622bc69d13","Type":"ContainerStarted","Data":"181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0"}
Mar 18 09:02:51.675427 master-0 kubenswrapper[6976]: I0318 09:02:51.675367 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5_2d0da6e3-3887-4361-8eae-e7447f9ff72c/package-server-manager/0.log"
Mar 18 09:02:51.676113 master-0 kubenswrapper[6976]: I0318 09:02:51.676054 6976 generic.go:334] "Generic (PLEG): container finished" podID="2d0da6e3-3887-4361-8eae-e7447f9ff72c" containerID="eff8515f7824ab4366b3686f83336181d1ef884da04bbecf12f9008db8dde14c" exitCode=1
Mar 18 09:02:51.676254 master-0 kubenswrapper[6976]: I0318 09:02:51.676148 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" event={"ID":"2d0da6e3-3887-4361-8eae-e7447f9ff72c","Type":"ContainerDied","Data":"eff8515f7824ab4366b3686f83336181d1ef884da04bbecf12f9008db8dde14c"}
Mar 18 09:02:51.677143 master-0 kubenswrapper[6976]: I0318 09:02:51.676948 6976 scope.go:117] "RemoveContainer" containerID="eff8515f7824ab4366b3686f83336181d1ef884da04bbecf12f9008db8dde14c"
Mar 18 09:02:51.691160 master-0 kubenswrapper[6976]: I0318 09:02:51.691088 6976 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80" exitCode=0
Mar 18 09:02:51.691997 master-0 kubenswrapper[6976]: I0318 09:02:51.691206 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"}
Mar 18 09:02:51.692099 master-0 kubenswrapper[6976]: I0318 09:02:51.692068 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:02:51.692173 master-0 kubenswrapper[6976]: I0318 09:02:51.692109 6976 scope.go:117] "RemoveContainer" containerID="6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"
Mar 18 09:02:51.704549 master-0 kubenswrapper[6976]: I0318 09:02:51.704217 6976 generic.go:334] "Generic (PLEG): container finished" podID="a0cd1cf7-be6f-4baf-8761-69c693476de9" containerID="99ea637f908899f3c91ea05ee2b0d7e3ac50162756d8cfe11cb446dfbb2129bd" exitCode=0
Mar 18 09:02:51.704549 master-0 kubenswrapper[6976]: I0318 09:02:51.704298 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" event={"ID":"a0cd1cf7-be6f-4baf-8761-69c693476de9","Type":"ContainerDied","Data":"99ea637f908899f3c91ea05ee2b0d7e3ac50162756d8cfe11cb446dfbb2129bd"}
Mar 18 09:02:51.705313 master-0 kubenswrapper[6976]: I0318 09:02:51.704853 6976 scope.go:117] "RemoveContainer" containerID="99ea637f908899f3c91ea05ee2b0d7e3ac50162756d8cfe11cb446dfbb2129bd"
Mar 18 09:02:51.711807 master-0 kubenswrapper[6976]: I0318 09:02:51.711762 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/1.log"
Mar 18 09:02:51.711884 master-0 kubenswrapper[6976]: I0318 09:02:51.711850 6976 generic.go:334] "Generic (PLEG): container finished" podID="65cff83a-8d8f-4e4f-96ef-99941c29ba53" containerID="26f8c4214ea54fb5e2ff7d9fa93e91ddc6301a4725fdb41f15e4fe0ec185b735" exitCode=0
Mar 18 09:02:51.712028 master-0 kubenswrapper[6976]: I0318 09:02:51.711981 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerDied","Data":"26f8c4214ea54fb5e2ff7d9fa93e91ddc6301a4725fdb41f15e4fe0ec185b735"}
Mar 18 09:02:51.712784 master-0 kubenswrapper[6976]: I0318 09:02:51.712744 6976 scope.go:117] "RemoveContainer" containerID="26f8c4214ea54fb5e2ff7d9fa93e91ddc6301a4725fdb41f15e4fe0ec185b735"
Mar 18 09:02:51.717460 master-0 kubenswrapper[6976]: I0318 09:02:51.717350 6976 generic.go:334] "Generic (PLEG): container finished" podID="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" containerID="eca3cc2c6f8e3aeae9e8d1a0e8694ecad0c3c1ccd8351a14dff6726fb181ef90" exitCode=0
Mar 18 09:02:51.717710 master-0 kubenswrapper[6976]: I0318 09:02:51.717536 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerDied","Data":"eca3cc2c6f8e3aeae9e8d1a0e8694ecad0c3c1ccd8351a14dff6726fb181ef90"}
Mar 18 09:02:51.719772 master-0 kubenswrapper[6976]: I0318 09:02:51.718527 6976 scope.go:117] "RemoveContainer" containerID="eca3cc2c6f8e3aeae9e8d1a0e8694ecad0c3c1ccd8351a14dff6726fb181ef90"
Mar 18 09:02:51.722131 master-0 kubenswrapper[6976]: I0318 09:02:51.722093 6976 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="40b12e3472fb68e00bb6ce887f00cd26e55268f567f01e14fdcd62a66e212074" exitCode=0
Mar 18 09:02:51.722239 master-0 kubenswrapper[6976]: I0318 09:02:51.722181 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerDied","Data":"40b12e3472fb68e00bb6ce887f00cd26e55268f567f01e14fdcd62a66e212074"}
Mar 18 09:02:51.723280 master-0 kubenswrapper[6976]: I0318 09:02:51.723256 6976 scope.go:117] "RemoveContainer" containerID="40b12e3472fb68e00bb6ce887f00cd26e55268f567f01e14fdcd62a66e212074"
Mar 18 09:02:51.725626 master-0 kubenswrapper[6976]: I0318 09:02:51.725589 6976 generic.go:334] "Generic (PLEG): container finished" podID="6c56e1ac-8752-4e46-8692-93716087f0e0" containerID="e78bbb854e3d9943cb3fa89e45e1e19c6f32f1732fab0adc69b2c8517be93fa3" exitCode=0
Mar 18 09:02:51.725714 master-0 kubenswrapper[6976]: I0318 09:02:51.725672 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" event={"ID":"6c56e1ac-8752-4e46-8692-93716087f0e0","Type":"ContainerDied","Data":"e78bbb854e3d9943cb3fa89e45e1e19c6f32f1732fab0adc69b2c8517be93fa3"}
Mar 18 09:02:51.726150 master-0 kubenswrapper[6976]: I0318 09:02:51.726111 6976 scope.go:117] "RemoveContainer" containerID="e78bbb854e3d9943cb3fa89e45e1e19c6f32f1732fab0adc69b2c8517be93fa3"
Mar 18 09:02:51.728077 master-0 kubenswrapper[6976]: I0318 09:02:51.728038 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log"
Mar 18 09:02:51.728457 master-0 kubenswrapper[6976]: I0318 09:02:51.728415 6976 generic.go:334] "Generic (PLEG): container finished" podID="fdb52116-9c55-4464-99c8-fc2e4559996b" containerID="bdeb3e204eeda9a4ca5f0b606295f7a8a8b0db7e2e36aab9adc87281923f44e9" exitCode=255
Mar 18 09:02:51.728532 master-0 kubenswrapper[6976]: I0318 09:02:51.728491 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" event={"ID":"fdb52116-9c55-4464-99c8-fc2e4559996b","Type":"ContainerDied","Data":"bdeb3e204eeda9a4ca5f0b606295f7a8a8b0db7e2e36aab9adc87281923f44e9"}
Mar 18 09:02:51.729030 master-0 kubenswrapper[6976]: I0318 09:02:51.728981 6976 scope.go:117] "RemoveContainer" containerID="bdeb3e204eeda9a4ca5f0b606295f7a8a8b0db7e2e36aab9adc87281923f44e9"
Mar 18 09:02:51.734723 master-0 kubenswrapper[6976]: I0318 09:02:51.734686 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/1.log"
Mar 18 09:02:51.734807 master-0 kubenswrapper[6976]: I0318 09:02:51.734734 6976 generic.go:334] "Generic (PLEG): container finished" podID="be2682e4-cb63-4102-a83e-ef28023e273a" containerID="e0c10cb728f84836bdf3fdacd9f7ace9b139b03a5e08557846d8eceff033db2d" exitCode=0
Mar 18 09:02:51.734807 master-0 kubenswrapper[6976]: I0318 09:02:51.734763 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerDied","Data":"e0c10cb728f84836bdf3fdacd9f7ace9b139b03a5e08557846d8eceff033db2d"}
Mar 18 09:02:51.735190 master-0 kubenswrapper[6976]: I0318 09:02:51.735084 6976 scope.go:117] "RemoveContainer" containerID="e0c10cb728f84836bdf3fdacd9f7ace9b139b03a5e08557846d8eceff033db2d"
Mar 18 09:02:51.766920 master-0 kubenswrapper[6976]: I0318 09:02:51.766735 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 09:02:51.844446 master-0 kubenswrapper[6976]: I0318 09:02:51.844399 6976 scope.go:117] "RemoveContainer" containerID="2e60113a55bc3fdf5ffd475c0a2b9ffa85c87d1620b1886f6cf55bbb6b1809ed"
Mar 18 09:02:51.947086 master-0 kubenswrapper[6976]: I0318 09:02:51.947012 6976 scope.go:117] "RemoveContainer" containerID="35bec5aad4d31f588044876420b3abf5aa56e6a349124b911e43ef3a01a96e33"
Mar 18 09:02:52.058051 master-0 kubenswrapper[6976]: I0318 09:02:52.058010 6976 scope.go:117] "RemoveContainer" containerID="65e3988f2be17b2abc550a4cf35f76189f8aca364b91625f45824c3c0a649d5f"
Mar 18 09:02:52.086469 master-0 kubenswrapper[6976]: I0318 09:02:52.086388 6976 scope.go:117] "RemoveContainer" containerID="9386051748bed6f19f4cd27daf0e83a55db215d4d03fde7c071e527ef1bc497c"
Mar 18 09:02:52.298773 master-0 kubenswrapper[6976]: I0318 09:02:52.298208 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:52.298773 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:52.298773 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:52.298773 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:52.305014 master-0 kubenswrapper[6976]: I0318 09:02:52.304971 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:52.355658 master-0 kubenswrapper[6976]: E0318 09:02:52.355482 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:02:52.745965 master-0 kubenswrapper[6976]: I0318 09:02:52.745388 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d"}
Mar 18 09:02:52.746584 master-0 kubenswrapper[6976]: I0318 09:02:52.746545 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 09:02:52.747997 master-0 kubenswrapper[6976]: I0318 09:02:52.747962 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/0.log"
Mar 18 09:02:52.748064 master-0 kubenswrapper[6976]: I0318 09:02:52.748025 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" event={"ID":"1deb139f-1903-417e-835c-28abdd156cdb","Type":"ContainerStarted","Data":"7dc072b220c8a283904041c11068c4527f8175f7ab46611a81a126e49dea28d6"}
Mar 18 09:02:52.751173 master-0 kubenswrapper[6976]: I0318 09:02:52.751149 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" event={"ID":"6c56e1ac-8752-4e46-8692-93716087f0e0","Type":"ContainerStarted","Data":"083fd46547a930d8062ce1b5df56b89be20a8d1bf489685fbdcc62dfbd9503af"}
Mar 18 09:02:52.753174 master-0 kubenswrapper[6976]: I0318 09:02:52.753151 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" event={"ID":"be2682e4-cb63-4102-a83e-ef28023e273a","Type":"ContainerStarted","Data":"014a29173b6d4a95286c456292e5380b0d493143f02314cea18f2d053904ff2d"}
Mar 18 09:02:52.755129 master-0 kubenswrapper[6976]: I0318 09:02:52.755095 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" event={"ID":"bb6ef4c4-bff3-4559-8e42-582bbd668b7c","Type":"ContainerStarted","Data":"02a0365da7873aab77984e042787a4b634abe49052d25eb6d4274af89eddf53c"}
Mar 18 09:02:52.757433 master-0 kubenswrapper[6976]: I0318 09:02:52.757400 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cd8f1b2378c428693218d79b09a56c9b55b51bb98be0e6bcf8f6074d75fc8fec"}
Mar 18 09:02:52.757736 master-0 kubenswrapper[6976]: I0318 09:02:52.757705 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:02:52.757925 master-0 kubenswrapper[6976]: E0318 09:02:52.757890 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(46f265536aba6292ead501bc9b49f327)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327"
Mar 18 09:02:52.759847 master-0 kubenswrapper[6976]: I0318 09:02:52.759814 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5_2d0da6e3-3887-4361-8eae-e7447f9ff72c/package-server-manager/0.log"
Mar 18 09:02:52.760219 master-0 kubenswrapper[6976]: I0318 09:02:52.760181 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" event={"ID":"2d0da6e3-3887-4361-8eae-e7447f9ff72c","Type":"ContainerStarted","Data":"b379cbef2ec9a2461be8a8ee103538764adaafe87ec412533ae47fa80f6b3bc3"}
Mar 18 09:02:52.760646 master-0 kubenswrapper[6976]: I0318 09:02:52.760617 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 09:02:52.763301 master-0 kubenswrapper[6976]: I0318 09:02:52.762586 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" event={"ID":"680006ef-a955-491e-b6a3-1ca7fcc20165","Type":"ContainerStarted","Data":"dd40da302292d1a10a4dc9ad49415905a1797d5c1f96d1f843dde2359fc6c889"}
Mar 18 09:02:52.764030 master-0 kubenswrapper[6976]: I0318 09:02:52.763984 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" event={"ID":"c46fcf39-9167-4ec2-9d2c-0a622bc69d13","Type":"ContainerStarted","Data":"ee10cfeeb8c93ff8e40f81f0386b22a513e8b6ef1f61583ef7f0a572ddbf099a"}
Mar 18 09:02:52.767800 master-0 kubenswrapper[6976]: I0318 09:02:52.767757 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" event={"ID":"a0cd1cf7-be6f-4baf-8761-69c693476de9","Type":"ContainerStarted","Data":"b347f0e5d8bff4fd2d586b797dd7a03d3562e7b8c89f15d53c4dd9e6188cf322"}
Mar 18 09:02:52.770929 master-0 kubenswrapper[6976]: I0318 09:02:52.770560 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" event={"ID":"65cff83a-8d8f-4e4f-96ef-99941c29ba53","Type":"ContainerStarted","Data":"33494a6d44eca243bebecebd05ec26d951ef335b6d0b0d245a9e3e38bc6560cf"}
Mar 18 09:02:52.772883 master-0 kubenswrapper[6976]: I0318 09:02:52.772479 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" event={"ID":"c5c995cf-40a0-4cd6-87fa-96a522f7bc57","Type":"ContainerStarted","Data":"568f060dcae64325b570159a98d0915ebbb1ff43c71c558863e9b03e66903dcd"}
Mar 18 09:02:52.775472 master-0 kubenswrapper[6976]: I0318 09:02:52.775448 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" event={"ID":"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd","Type":"ContainerStarted","Data":"261f3b2de07642dba68c7e08e4d41c947b9c7e5857793cf7979bfda6a0a8b63a"}
Mar 18 09:02:52.782378 master-0 kubenswrapper[6976]: I0318 09:02:52.782339 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log"
Mar 18 09:02:52.782752 master-0 kubenswrapper[6976]: I0318 09:02:52.782719 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" event={"ID":"fdb52116-9c55-4464-99c8-fc2e4559996b","Type":"ContainerStarted","Data":"1288b2dc2de89f27ed065eae5132f97c1959fed907bdcafd2fd2b861cd249573"}
Mar 18 09:02:52.952162 master-0 kubenswrapper[6976]: I0318 09:02:52.952023 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" podStartSLOduration=2.951997356 podStartE2EDuration="2.951997356s" podCreationTimestamp="2026-03-18 09:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:02:52.948796524 +0000 UTC m=+872.534398159" watchObservedRunningTime="2026-03-18 09:02:52.951997356 +0000 UTC m=+872.537598991"
Mar 18 09:02:53.280172 master-0 kubenswrapper[6976]: I0318 09:02:53.280077 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:53.280172 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:53.280172 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:53.280172 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:53.280505 master-0 kubenswrapper[6976]: I0318 09:02:53.280185 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:54.280645 master-0 kubenswrapper[6976]: I0318 09:02:54.280552 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:54.280645 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:54.280645 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:54.280645 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:54.281440 master-0 kubenswrapper[6976]: I0318 09:02:54.281407 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:55.280513 master-0 kubenswrapper[6976]: I0318 09:02:55.280403 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:55.280513 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:55.280513 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:55.280513 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:55.281261 master-0 kubenswrapper[6976]: I0318 09:02:55.280519 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:56.280067 master-0 kubenswrapper[6976]: I0318 09:02:56.279983 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:56.280067 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:56.280067 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:56.280067 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:56.280067 master-0 kubenswrapper[6976]: I0318 09:02:56.280067 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:57.268851 master-0 kubenswrapper[6976]: I0318 09:02:57.268769 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:02:57.269130 master-0 kubenswrapper[6976]: I0318 09:02:57.268855 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:02:57.281044 master-0 kubenswrapper[6976]: I0318 09:02:57.280921 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:57.281044 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:57.281044 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:57.281044 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:57.281044 master-0 kubenswrapper[6976]: I0318 09:02:57.281029 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:57.767102 master-0 kubenswrapper[6976]: I0318 09:02:57.767032 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:02:57.767398 master-0 kubenswrapper[6976]: I0318 09:02:57.767128 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:02:58.280839 master-0 kubenswrapper[6976]: I0318 09:02:58.280746 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:58.280839 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:58.280839 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:58.280839 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:58.281524 master-0 kubenswrapper[6976]: I0318 09:02:58.280847 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:59.004381 master-0 kubenswrapper[6976]: I0318 09:02:59.004284 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:02:59.005054 master-0 kubenswrapper[6976]: I0318 09:02:59.005001 6976 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:02:59.280216 master-0 kubenswrapper[6976]: I0318 09:02:59.280165 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:02:59.280216 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:02:59.280216 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:02:59.280216 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:02:59.280409 master-0 kubenswrapper[6976]: I0318 09:02:59.280218 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:02:59.838370 master-0 kubenswrapper[6976]: I0318 09:02:59.838302 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log"
Mar 18 09:02:59.839895 master-0 kubenswrapper[6976]: I0318 09:02:59.839705 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/1.log"
Mar 18 09:02:59.840078 master-0 kubenswrapper[6976]: I0318 09:02:59.840047 6976 generic.go:334] "Generic (PLEG): container finished" podID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" containerID="968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857" exitCode=1
Mar 18 09:02:59.840140 master-0 kubenswrapper[6976]: I0318 09:02:59.840123 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerDied","Data":"968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857"}
Mar 18 09:02:59.840192 master-0 kubenswrapper[6976]: I0318 09:02:59.840170 6976 scope.go:117] "RemoveContainer" containerID="2acf0cea8b1392ffa9520a8d120668aa5dceff5734023e4ff18420eb0b6a71d5"
Mar 18 09:02:59.841028 master-0 kubenswrapper[6976]: I0318 09:02:59.841002 6976 scope.go:117] "RemoveContainer" containerID="968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857"
Mar 18 09:02:59.841248 master-0 kubenswrapper[6976]: E0318 09:02:59.841223 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mcd6d_openshift-machine-api(eb8f3615-9e89-4b51-87a2-7d168c81adf3)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" podUID="eb8f3615-9e89-4b51-87a2-7d168c81adf3"
Mar 18 09:02:59.845684 master-0 kubenswrapper[6976]: I0318 09:02:59.845657 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4fc555cd68d5d190723bdb906f024eca28a915e20d6010038a593dff24a564cd"}
Mar 18 09:03:00.268905 master-0 kubenswrapper[6976]: I0318 09:03:00.268841 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:03:00.269375 master-0 kubenswrapper[6976]: I0318 09:03:00.268945 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:03:00.279777 master-0 kubenswrapper[6976]: I0318 09:03:00.279740 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:00.279777 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:00.279777 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:00.279777 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:00.280098 master-0 kubenswrapper[6976]: I0318 09:03:00.280072 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:00.767599 master-0 kubenswrapper[6976]: I0318 09:03:00.767514 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:03:00.767850 master-0 kubenswrapper[6976]: I0318 09:03:00.767638 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:03:00.854750 master-0 kubenswrapper[6976]: I0318 09:03:00.854719 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log"
Mar 18 09:03:01.279531 master-0 kubenswrapper[6976]: I0318 09:03:01.279471 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:01.279531 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:01.279531 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:01.279531 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:01.279957 master-0 kubenswrapper[6976]: I0318 09:03:01.279537 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:01.294970 master-0 kubenswrapper[6976]: I0318 09:03:01.294845 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:01.781466 master-0 kubenswrapper[6976]: I0318 09:03:01.781412 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:02.279472 master-0 kubenswrapper[6976]: I0318 09:03:02.279392 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:02.279472 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:02.279472 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:02.279472 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:02.280765 master-0 kubenswrapper[6976]: I0318 09:03:02.279491 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:03.269242 master-0 kubenswrapper[6976]: I0318 09:03:03.269149 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:03:03.269469 master-0 kubenswrapper[6976]: I0318 09:03:03.269246 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:03:03.280026 master-0 kubenswrapper[6976]: I0318 09:03:03.279949 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:03.280026 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:03.280026 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:03.280026 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:03.280554 master-0 kubenswrapper[6976]: I0318 09:03:03.280063 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:03.565788 master-0 kubenswrapper[6976]: I0318 09:03:03.565619 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:03.572488 master-0 kubenswrapper[6976]: I0318 09:03:03.572415 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:03.767212 master-0 kubenswrapper[6976]: I0318 09:03:03.767120 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:03:03.767212 master-0 kubenswrapper[6976]: I0318 09:03:03.767201 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:03:03.767614 master-0 kubenswrapper[6976]: I0318 09:03:03.767265 6976 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r"
Mar 18 09:03:03.767989 master-0 kubenswrapper[6976]: I0318 09:03:03.767937 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body=
Mar 18 09:03:03.768064 master-0 kubenswrapper[6976]: I0318 09:03:03.768009 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused"
Mar 18 09:03:03.768130 master-0 kubenswrapper[6976]: I0318 09:03:03.768103 6976 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 18 09:03:03.768202 master-0 kubenswrapper[6976]: I0318 09:03:03.768152 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" containerID="cri-o://e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d" gracePeriod=30
Mar 18 09:03:03.783483 master-0 kubenswrapper[6976]: I0318 09:03:03.783420 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:03.788399 master-0
kubenswrapper[6976]: I0318 09:03:03.788353 6976 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:04.279709 master-0 kubenswrapper[6976]: I0318 09:03:04.279488 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:04.279709 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:04.279709 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:04.279709 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:04.279709 master-0 kubenswrapper[6976]: I0318 09:03:04.279595 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:04.895859 master-0 kubenswrapper[6976]: I0318 09:03:04.895795 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-config-operator/2.log" Mar 18 09:03:04.897884 master-0 kubenswrapper[6976]: I0318 09:03:04.897817 6976 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d" exitCode=255 Mar 18 09:03:04.897966 master-0 kubenswrapper[6976]: I0318 09:03:04.897884 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerDied","Data":"e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d"} Mar 18 09:03:04.898019 master-0 
kubenswrapper[6976]: I0318 09:03:04.897957 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" event={"ID":"95143c61-6f91-4cd4-9411-31c2fb75d4d0","Type":"ContainerStarted","Data":"a1db4b18e7c1a552609ad2b8ecfe3e77de11692c7bb3daff11a6d317a4758152"} Mar 18 09:03:04.898019 master-0 kubenswrapper[6976]: I0318 09:03:04.897996 6976 scope.go:117] "RemoveContainer" containerID="40b12e3472fb68e00bb6ce887f00cd26e55268f567f01e14fdcd62a66e212074" Mar 18 09:03:04.898319 master-0 kubenswrapper[6976]: I0318 09:03:04.898272 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:04.901334 master-0 kubenswrapper[6976]: I0318 09:03:04.901277 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/4.log" Mar 18 09:03:04.901994 master-0 kubenswrapper[6976]: I0318 09:03:04.901953 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/3.log" Mar 18 09:03:04.902038 master-0 kubenswrapper[6976]: I0318 09:03:04.902007 6976 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4" exitCode=1 Mar 18 09:03:04.902160 master-0 kubenswrapper[6976]: I0318 09:03:04.902102 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerDied","Data":"97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4"} Mar 18 09:03:04.902929 master-0 kubenswrapper[6976]: I0318 09:03:04.902879 6976 
scope.go:117] "RemoveContainer" containerID="97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4" Mar 18 09:03:04.904378 master-0 kubenswrapper[6976]: E0318 09:03:04.904166 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:03:04.956904 master-0 kubenswrapper[6976]: I0318 09:03:04.956835 6976 scope.go:117] "RemoveContainer" containerID="8570ba4062451d636e731c51df710a874c0ddb21fcafd404781319bf89550cbc" Mar 18 09:03:05.280552 master-0 kubenswrapper[6976]: I0318 09:03:05.280429 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:05.280552 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:05.280552 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:05.280552 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:05.280552 master-0 kubenswrapper[6976]: I0318 09:03:05.280516 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:05.599111 master-0 kubenswrapper[6976]: I0318 09:03:05.598986 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" Mar 18 09:03:05.599429 master-0 kubenswrapper[6976]: E0318 09:03:05.599350 6976 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-4cxfh_openshift-ingress-operator(bf7a3329-a04c-4b58-9364-b907c00cbe08)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" podUID="bf7a3329-a04c-4b58-9364-b907c00cbe08" Mar 18 09:03:05.913917 master-0 kubenswrapper[6976]: I0318 09:03:05.913784 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-config-operator/2.log" Mar 18 09:03:05.917317 master-0 kubenswrapper[6976]: I0318 09:03:05.917272 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/4.log" Mar 18 09:03:06.286600 master-0 kubenswrapper[6976]: I0318 09:03:06.282558 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:06.286600 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:06.286600 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:06.286600 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:06.286600 master-0 kubenswrapper[6976]: I0318 09:03:06.282698 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:07.280291 master-0 kubenswrapper[6976]: I0318 09:03:07.280133 6976 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:07.280291 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:07.280291 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:07.280291 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:07.280291 master-0 kubenswrapper[6976]: I0318 09:03:07.280246 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:08.280493 master-0 kubenswrapper[6976]: I0318 09:03:08.280389 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:08.280493 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:08.280493 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:08.280493 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:08.281166 master-0 kubenswrapper[6976]: I0318 09:03:08.280527 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:09.269406 master-0 kubenswrapper[6976]: I0318 09:03:09.269298 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 
10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 09:03:09.269406 master-0 kubenswrapper[6976]: I0318 09:03:09.269395 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 09:03:09.279716 master-0 kubenswrapper[6976]: I0318 09:03:09.279640 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:09.279716 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:09.279716 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:09.279716 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:09.280200 master-0 kubenswrapper[6976]: I0318 09:03:09.279720 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:09.766725 master-0 kubenswrapper[6976]: I0318 09:03:09.766639 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 09:03:09.767659 master-0 kubenswrapper[6976]: I0318 09:03:09.766746 6976 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" 
podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 09:03:10.279978 master-0 kubenswrapper[6976]: I0318 09:03:10.279887 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:10.279978 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:10.279978 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:10.279978 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:10.280460 master-0 kubenswrapper[6976]: I0318 09:03:10.280001 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:11.280370 master-0 kubenswrapper[6976]: I0318 09:03:11.280307 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:11.280370 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:11.280370 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:11.280370 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:11.280370 master-0 kubenswrapper[6976]: I0318 09:03:11.280372 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
18 09:03:12.661330 master-0 kubenswrapper[6976]: I0318 09:03:11.598732 6976 scope.go:117] "RemoveContainer" containerID="968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857" Mar 18 09:03:12.661330 master-0 kubenswrapper[6976]: E0318 09:03:11.599196 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-mcd6d_openshift-machine-api(eb8f3615-9e89-4b51-87a2-7d168c81adf3)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" podUID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" Mar 18 09:03:12.670558 master-0 kubenswrapper[6976]: I0318 09:03:12.670479 6976 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-whh6r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" start-of-body= Mar 18 09:03:12.670957 master-0 kubenswrapper[6976]: I0318 09:03:12.670638 6976 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" podUID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.21:8443/healthz\": dial tcp 10.128.0.21:8443: connect: connection refused" Mar 18 09:03:12.673292 master-0 kubenswrapper[6976]: I0318 09:03:12.673239 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:12.673292 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:12.673292 master-0 kubenswrapper[6976]: [+]process-running 
ok Mar 18 09:03:12.673292 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:12.673496 master-0 kubenswrapper[6976]: I0318 09:03:12.673301 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:13.283360 master-0 kubenswrapper[6976]: I0318 09:03:13.283322 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:13.283360 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:13.283360 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:13.283360 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:13.283720 master-0 kubenswrapper[6976]: I0318 09:03:13.283396 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:13.571670 master-0 kubenswrapper[6976]: I0318 09:03:13.571517 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:14.280517 master-0 kubenswrapper[6976]: I0318 09:03:14.280436 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:14.280517 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:14.280517 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 
09:03:14.280517 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:14.281651 master-0 kubenswrapper[6976]: I0318 09:03:14.280513 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:14.983252 master-0 kubenswrapper[6976]: E0318 09:03:14.983055 6976 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189de35495c0fbe6 kube-system 8779 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 08:49:40 +0000 UTC,LastTimestamp:2026-03-18 08:58:35.931464844 +0000 UTC m=+615.517066439,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:03:15.274486 master-0 kubenswrapper[6976]: I0318 09:03:15.274292 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:15.279899 master-0 kubenswrapper[6976]: I0318 09:03:15.279816 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:15.279899 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 
18 09:03:15.279899 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:15.279899 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:15.280606 master-0 kubenswrapper[6976]: I0318 09:03:15.279919 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:16.279732 master-0 kubenswrapper[6976]: I0318 09:03:16.279683 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:16.279732 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:16.279732 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:16.279732 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:16.280247 master-0 kubenswrapper[6976]: I0318 09:03:16.279746 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:17.280038 master-0 kubenswrapper[6976]: I0318 09:03:17.279966 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:17.280038 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:17.280038 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:17.280038 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:17.280038 master-0 kubenswrapper[6976]: I0318 09:03:17.280037 
6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:18.283278 master-0 kubenswrapper[6976]: I0318 09:03:18.283173 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:18.283278 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:18.283278 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:18.283278 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:18.284505 master-0 kubenswrapper[6976]: I0318 09:03:18.283285 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:19.279445 master-0 kubenswrapper[6976]: I0318 09:03:19.279394 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:19.279445 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:19.279445 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:19.279445 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:19.279818 master-0 kubenswrapper[6976]: I0318 09:03:19.279462 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 18 09:03:20.279726 master-0 kubenswrapper[6976]: I0318 09:03:20.279654 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:20.279726 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:20.279726 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:20.279726 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:20.279726 master-0 kubenswrapper[6976]: I0318 09:03:20.279721 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:20.603044 master-0 kubenswrapper[6976]: I0318 09:03:20.602895 6976 scope.go:117] "RemoveContainer" containerID="97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4" Mar 18 09:03:20.603240 master-0 kubenswrapper[6976]: E0318 09:03:20.603157 6976 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-qnc62_openshift-cluster-storage-operator(4e919445-81d0-4663-8941-f596d8121305)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" podUID="4e919445-81d0-4663-8941-f596d8121305" Mar 18 09:03:20.603419 master-0 kubenswrapper[6976]: I0318 09:03:20.603375 6976 scope.go:117] "RemoveContainer" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" Mar 18 09:03:21.280027 master-0 kubenswrapper[6976]: I0318 09:03:21.279948 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:21.280027 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:21.280027 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:21.280027 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:21.281174 master-0 kubenswrapper[6976]: I0318 09:03:21.280059 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:21.748447 master-0 kubenswrapper[6976]: I0318 09:03:21.748396 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/4.log"
Mar 18 09:03:21.748880 master-0 kubenswrapper[6976]: I0318 09:03:21.748843 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" event={"ID":"bf7a3329-a04c-4b58-9364-b907c00cbe08","Type":"ContainerStarted","Data":"8325eb97aabfc9906adab4d31d5263215c8fd5e81b00bac25c9c39c574dae63e"}
Mar 18 09:03:22.280097 master-0 kubenswrapper[6976]: I0318 09:03:22.280011 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:22.280097 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:22.280097 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:22.280097 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:22.280801 master-0 kubenswrapper[6976]: I0318 09:03:22.280119 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:23.280451 master-0 kubenswrapper[6976]: I0318 09:03:23.280333 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:23.280451 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:23.280451 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:23.280451 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:23.280451 master-0 kubenswrapper[6976]: I0318 09:03:23.280424 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:24.281118 master-0 kubenswrapper[6976]: I0318 09:03:24.281003 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:24.281118 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:24.281118 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:24.281118 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:24.281118 master-0 kubenswrapper[6976]: I0318 09:03:24.281087 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:25.279867 master-0 kubenswrapper[6976]: I0318 09:03:25.279794 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:25.279867 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:25.279867 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:25.279867 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:25.280323 master-0 kubenswrapper[6976]: I0318 09:03:25.279885 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:25.759604 master-0 kubenswrapper[6976]: I0318 09:03:25.759534 6976 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5"
Mar 18 09:03:26.283683 master-0 kubenswrapper[6976]: I0318 09:03:26.283626 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:26.283683 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:26.283683 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:26.283683 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:26.283962 master-0 kubenswrapper[6976]: I0318 09:03:26.283705 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:26.598261 master-0 kubenswrapper[6976]: I0318 09:03:26.598163 6976 scope.go:117] "RemoveContainer" containerID="968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857"
Mar 18 09:03:26.812693 master-0 kubenswrapper[6976]: I0318 09:03:26.812637 6976 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log"
Mar 18 09:03:26.813512 master-0 kubenswrapper[6976]: I0318 09:03:26.813478 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" event={"ID":"eb8f3615-9e89-4b51-87a2-7d168c81adf3","Type":"ContainerStarted","Data":"eef0fe6d8668da55d536ced87f0f00cdd0ca32f25d59cba284a200515a406a4b"}
Mar 18 09:03:27.014955 master-0 kubenswrapper[6976]: I0318 09:03:27.014913 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"]
Mar 18 09:03:27.015984 master-0 kubenswrapper[6976]: I0318 09:03:27.015963 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.021614 master-0 kubenswrapper[6976]: I0318 09:03:27.018611 6976 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-jw7t8"
Mar 18 09:03:27.021614 master-0 kubenswrapper[6976]: I0318 09:03:27.018754 6976 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 09:03:27.040320 master-0 kubenswrapper[6976]: I0318 09:03:27.040250 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"]
Mar 18 09:03:27.173180 master-0 kubenswrapper[6976]: I0318 09:03:27.173110 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.173478 master-0 kubenswrapper[6976]: I0318 09:03:27.173376 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.173478 master-0 kubenswrapper[6976]: I0318 09:03:27.173421 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.274605 master-0 kubenswrapper[6976]: I0318 09:03:27.274450 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.274605 master-0 kubenswrapper[6976]: I0318 09:03:27.274606 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.274856 master-0 kubenswrapper[6976]: I0318 09:03:27.274618 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.274856 master-0 kubenswrapper[6976]: I0318 09:03:27.274634 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.274856 master-0 kubenswrapper[6976]: I0318 09:03:27.274670 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.288998 master-0 kubenswrapper[6976]: I0318 09:03:27.287577 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:27.288998 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:27.288998 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:27.288998 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:27.288998 master-0 kubenswrapper[6976]: I0318 09:03:27.287646 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:27.305408 master-0 kubenswrapper[6976]: I0318 09:03:27.305349 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.334669 master-0 kubenswrapper[6976]: I0318 09:03:27.331422 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:03:27.801996 master-0 kubenswrapper[6976]: W0318 09:03:27.801959 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode2af879e_1465_40bf_bf72_30c7e89386a3.slice/crio-0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a WatchSource:0}: Error finding container 0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a: Status 404 returned error can't find the container with id 0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a
Mar 18 09:03:27.803172 master-0 kubenswrapper[6976]: I0318 09:03:27.803114 6976 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"]
Mar 18 09:03:27.827585 master-0 kubenswrapper[6976]: I0318 09:03:27.826327 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"e2af879e-1465-40bf-bf72-30c7e89386a3","Type":"ContainerStarted","Data":"0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a"}
Mar 18 09:03:28.279673 master-0 kubenswrapper[6976]: I0318 09:03:28.279237 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:28.279673 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:28.279673 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:28.279673 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:28.279673 master-0 kubenswrapper[6976]: I0318 09:03:28.279338 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:28.843953 master-0 kubenswrapper[6976]: I0318 09:03:28.841938 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"e2af879e-1465-40bf-bf72-30c7e89386a3","Type":"ContainerStarted","Data":"96f265b2997fc8f98bf93a3602e88baaf10a3bddac7d7468686ac08fed98ccb6"}
Mar 18 09:03:29.280659 master-0 kubenswrapper[6976]: I0318 09:03:29.280557 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:29.280659 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:29.280659 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:29.280659 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:29.280659 master-0 kubenswrapper[6976]: I0318 09:03:29.280644 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:29.554326 master-0 kubenswrapper[6976]: I0318 09:03:29.554213 6976 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:03:29.555409 master-0 kubenswrapper[6976]: I0318 09:03:29.555371 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.592786 master-0 kubenswrapper[6976]: I0318 09:03:29.592657 6976 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=2.592632897 podStartE2EDuration="2.592632897s" podCreationTimestamp="2026-03-18 09:03:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:28.87484197 +0000 UTC m=+908.460443575" watchObservedRunningTime="2026-03-18 09:03:29.592632897 +0000 UTC m=+909.178234492"
Mar 18 09:03:29.593165 master-0 kubenswrapper[6976]: I0318 09:03:29.593134 6976 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:03:29.620688 master-0 kubenswrapper[6976]: I0318 09:03:29.620634 6976 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 18 09:03:29.620918 master-0 kubenswrapper[6976]: I0318 09:03:29.620890 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a" gracePeriod=15
Mar 18 09:03:29.621001 master-0 kubenswrapper[6976]: I0318 09:03:29.620956 6976 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95" gracePeriod=15
Mar 18 09:03:29.622074 master-0 kubenswrapper[6976]: I0318 09:03:29.622041 6976 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:03:29.622383 master-0 kubenswrapper[6976]: E0318 09:03:29.622338 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 09:03:29.622383 master-0 kubenswrapper[6976]: I0318 09:03:29.622360 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 09:03:29.622508 master-0 kubenswrapper[6976]: E0318 09:03:29.622395 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:03:29.622508 master-0 kubenswrapper[6976]: I0318 09:03:29.622404 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:03:29.622508 master-0 kubenswrapper[6976]: E0318 09:03:29.622425 6976 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 09:03:29.622508 master-0 kubenswrapper[6976]: I0318 09:03:29.622431 6976 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 09:03:29.622728 master-0 kubenswrapper[6976]: I0318 09:03:29.622538 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 18 09:03:29.622728 master-0 kubenswrapper[6976]: I0318 09:03:29.622554 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:03:29.622728 master-0 kubenswrapper[6976]: I0318 09:03:29.622586 6976 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 18 09:03:29.625191 master-0 kubenswrapper[6976]: I0318 09:03:29.625145 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.667102 master-0 kubenswrapper[6976]: E0318 09:03:29.667059 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.712219 master-0 kubenswrapper[6976]: I0318 09:03:29.712168 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.712219 master-0 kubenswrapper[6976]: I0318 09:03:29.712227 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.712507 master-0 kubenswrapper[6976]: I0318 09:03:29.712268 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.712507 master-0 kubenswrapper[6976]: I0318 09:03:29.712306 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.712607 master-0 kubenswrapper[6976]: I0318 09:03:29.712530 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.782420 master-0 kubenswrapper[6976]: E0318 09:03:29.782341 6976 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podc46fcf39_9167_4ec2_9d2c_0a622bc69d13.slice/crio-ee10cfeeb8c93ff8e40f81f0386b22a513e8b6ef1f61583ef7f0a572ddbf099a.scope\": RecentStats: unable to find data in memory cache]"
Mar 18 09:03:29.813981 master-0 kubenswrapper[6976]: I0318 09:03:29.813831 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.813981 master-0 kubenswrapper[6976]: I0318 09:03:29.813927 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.813981 master-0 kubenswrapper[6976]: I0318 09:03:29.813961 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.813967 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.814043 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.814001 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.814190 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.814183 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814319 master-0 kubenswrapper[6976]: I0318 09:03:29.814299 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814860 master-0 kubenswrapper[6976]: I0318 09:03:29.814427 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814860 master-0 kubenswrapper[6976]: I0318 09:03:29.814529 6976 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.814860 master-0 kubenswrapper[6976]: I0318 09:03:29.814615 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.814860 master-0 kubenswrapper[6976]: I0318 09:03:29.814695 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.852619 master-0 kubenswrapper[6976]: I0318 09:03:29.852520 6976 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95" exitCode=0
Mar 18 09:03:29.854524 master-0 kubenswrapper[6976]: I0318 09:03:29.854466 6976 generic.go:334] "Generic (PLEG): container finished" podID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" containerID="ee10cfeeb8c93ff8e40f81f0386b22a513e8b6ef1f61583ef7f0a572ddbf099a" exitCode=0
Mar 18 09:03:29.854695 master-0 kubenswrapper[6976]: I0318 09:03:29.854601 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" event={"ID":"c46fcf39-9167-4ec2-9d2c-0a622bc69d13","Type":"ContainerDied","Data":"ee10cfeeb8c93ff8e40f81f0386b22a513e8b6ef1f61583ef7f0a572ddbf099a"}
Mar 18 09:03:29.855937 master-0 kubenswrapper[6976]: I0318 09:03:29.855869 6976 status_manager.go:851] "Failed to get status for pod" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:03:29.887416 master-0 kubenswrapper[6976]: I0318 09:03:29.887351 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:29.916215 master-0 kubenswrapper[6976]: I0318 09:03:29.916154 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.916215 master-0 kubenswrapper[6976]: I0318 09:03:29.916202 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.916435 master-0 kubenswrapper[6976]: I0318 09:03:29.916274 6976 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.916435 master-0 kubenswrapper[6976]: I0318 09:03:29.916347 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.916435 master-0 kubenswrapper[6976]: I0318 09:03:29.916387 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.916435 master-0 kubenswrapper[6976]: I0318 09:03:29.916410 6976 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:29.920797 master-0 kubenswrapper[6976]: W0318 09:03:29.920677 6976 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95378a840215d5780aa88df876aac909.slice/crio-bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe WatchSource:0}: Error finding container bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe: Status 404 returned error can't find the container with id bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe
Mar 18 09:03:29.928020 master-0 kubenswrapper[6976]: E0318 09:03:29.927845 6976 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de415a87ac1fe openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:95378a840215d5780aa88df876aac909,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:03:29.925906942 +0000 UTC m=+909.511508577,LastTimestamp:2026-03-18 09:03:29.925906942 +0000 UTC m=+909.511508577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:03:29.968133 master-0 kubenswrapper[6976]: I0318 09:03:29.967942 6976 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:30.279557 master-0 kubenswrapper[6976]: I0318 09:03:30.279517 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 18 09:03:30.279557 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld
Mar 18 09:03:30.279557 master-0 kubenswrapper[6976]: [+]process-running ok
Mar 18 09:03:30.279557 master-0 kubenswrapper[6976]: healthz check failed
Mar 18 09:03:30.279806 master-0 kubenswrapper[6976]: I0318 09:03:30.279584 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 18 09:03:30.618367 master-0 kubenswrapper[6976]: I0318 09:03:30.618275 6976 status_manager.go:851] "Failed to get status for pod" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:03:30.868111 master-0 kubenswrapper[6976]: I0318 09:03:30.868005 6976 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666" exitCode=0
Mar 18 09:03:30.869057 master-0 kubenswrapper[6976]: I0318 09:03:30.868136 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerDied","Data":"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666"}
Mar 18 09:03:30.869057 master-0 kubenswrapper[6976]: I0318 09:03:30.868185 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"13a068e44f036eb5ea2827a8a27172c655290a87fa0428a7b71b67b8505f2fbb"}
Mar 18 09:03:30.870028 master-0 kubenswrapper[6976]: E0318 09:03:30.869940 6976 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:30.870165 master-0 kubenswrapper[6976]: I0318 09:03:30.870022 6976 status_manager.go:851] "Failed to get status for pod" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:03:30.870870 master-0 kubenswrapper[6976]: I0318 09:03:30.870813 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95378a840215d5780aa88df876aac909","Type":"ContainerStarted","Data":"c361cbba945001e9baf7ce5c31f92c9a1b2e62ac88d976a094c24336f0593c2e"}
Mar 18 09:03:30.870995 master-0 kubenswrapper[6976]: I0318 09:03:30.870890 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95378a840215d5780aa88df876aac909","Type":"ContainerStarted","Data":"bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe"}
Mar 18 09:03:30.873611 master-0 kubenswrapper[6976]: I0318 09:03:30.873503 6976 status_manager.go:851] "Failed to get status for pod" podUID="95378a840215d5780aa88df876aac909" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:03:30.874618 master-0 kubenswrapper[6976]: I0318 09:03:30.874464 6976 status_manager.go:851] "Failed to get status for pod" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 18 09:03:31.186502 master-0 kubenswrapper[6976]: I0318 09:03:31.186462 6976 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:31.189100 master-0 kubenswrapper[6976]: I0318 09:03:31.189052 6976 status_manager.go:851] "Failed to get status for pod" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:03:31.189711 master-0 kubenswrapper[6976]: I0318 09:03:31.189678 6976 status_manager.go:851] "Failed to get status for pod" podUID="95378a840215d5780aa88df876aac909" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:03:31.280332 master-0 kubenswrapper[6976]: I0318 09:03:31.280256 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:31.280332 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:31.280332 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:31.280332 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:31.280954 master-0 kubenswrapper[6976]: I0318 09:03:31.280366 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:31.339758 master-0 kubenswrapper[6976]: I0318 09:03:31.339488 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:03:31.339758 master-0 kubenswrapper[6976]: I0318 09:03:31.339606 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock" (OuterVolumeSpecName: "var-lock") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:31.339758 master-0 kubenswrapper[6976]: I0318 09:03:31.339644 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:03:31.339758 master-0 kubenswrapper[6976]: I0318 09:03:31.339698 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:03:31.340337 master-0 kubenswrapper[6976]: I0318 09:03:31.339767 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:31.340337 master-0 kubenswrapper[6976]: I0318 09:03:31.340053 6976 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:31.340337 master-0 kubenswrapper[6976]: I0318 09:03:31.340071 6976 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:31.349780 master-0 kubenswrapper[6976]: I0318 09:03:31.349715 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:03:31.441120 master-0 kubenswrapper[6976]: I0318 09:03:31.441066 6976 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:31.891588 master-0 kubenswrapper[6976]: I0318 09:03:31.891393 6976 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a" exitCode=0 Mar 18 09:03:31.891588 master-0 kubenswrapper[6976]: I0318 09:03:31.891470 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b" Mar 18 09:03:31.895663 master-0 kubenswrapper[6976]: I0318 09:03:31.893797 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc"} Mar 18 09:03:31.895663 master-0 kubenswrapper[6976]: I0318 09:03:31.893840 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be"} Mar 18 09:03:31.895663 master-0 kubenswrapper[6976]: I0318 09:03:31.893854 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f"} Mar 18 09:03:31.896896 master-0 kubenswrapper[6976]: I0318 09:03:31.896865 6976 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-retry-2-master-0" event={"ID":"c46fcf39-9167-4ec2-9d2c-0a622bc69d13","Type":"ContainerDied","Data":"181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0"} Mar 18 09:03:31.896955 master-0 kubenswrapper[6976]: I0318 09:03:31.896926 6976 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0" Mar 18 09:03:31.896955 master-0 kubenswrapper[6976]: I0318 09:03:31.896874 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:31.937786 master-0 kubenswrapper[6976]: I0318 09:03:31.937656 6976 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 18 09:03:32.050414 master-0 kubenswrapper[6976]: I0318 09:03:32.050347 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050435 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050458 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050454 6976 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050488 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050528 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050540 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050555 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050578 6976 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050596 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.050672 master-0 kubenswrapper[6976]: I0318 09:03:32.050623 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050785 6976 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050796 6976 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050808 6976 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050816 6976 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050861 6976 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.051042 master-0 kubenswrapper[6976]: I0318 09:03:32.050900 6976 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:32.152789 master-0 kubenswrapper[6976]: I0318 09:03:32.152723 6976 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:32.283996 master-0 kubenswrapper[6976]: I0318 09:03:32.281307 6976 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-sgsmn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 18 09:03:32.283996 master-0 kubenswrapper[6976]: [-]has-synced failed: reason withheld Mar 18 09:03:32.283996 master-0 kubenswrapper[6976]: [+]process-running ok Mar 18 09:03:32.283996 master-0 kubenswrapper[6976]: healthz check failed Mar 18 09:03:32.283996 master-0 kubenswrapper[6976]: I0318 09:03:32.281388 6976 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" podUID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:32.400509 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 18 09:03:32.438202 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 18 09:03:32.438459 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 18 09:03:32.439362 master-0 systemd[1]: kubelet.service: Consumed 2min 6.869s CPU time. Mar 18 09:03:32.453124 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 18 09:03:32.565177 master-0 kubenswrapper[26053]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 09:03:32.565177 master-0 kubenswrapper[26053]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 18 09:03:32.565177 master-0 kubenswrapper[26053]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:03:32.566199 master-0 kubenswrapper[26053]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 18 09:03:32.566199 master-0 kubenswrapper[26053]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 18 09:03:32.566199 master-0 kubenswrapper[26053]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 18 09:03:32.566199 master-0 kubenswrapper[26053]: I0318 09:03:32.565360 26053 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567948 26053 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567967 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567972 26053 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567977 26053 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567981 26053 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567985 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 09:03:32.567980 master-0 kubenswrapper[26053]: W0318 09:03:32.567989 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.567994 26053 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.567999 26053 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568004 26053 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568028 26053 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568033 26053 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568038 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568041 26053 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568045 26053 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568049 26053 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568052 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568056 26053 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568060 26053 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568064 26053 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568069 26053 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568073 26053 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568077 26053 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568081 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568086 26053 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 18 09:03:32.568281 master-0 kubenswrapper[26053]: W0318 09:03:32.568090 26053 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568094 26053 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568098 26053 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568102 26053 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568111 26053 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568115 26053 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568119 26053 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568123 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568126 26053 feature_gate.go:330] unrecognized feature 
gate: ManagedBootImagesAWS Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568130 26053 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568134 26053 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568137 26053 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568141 26053 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568146 26053 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568150 26053 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568155 26053 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568159 26053 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568163 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568166 26053 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568170 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 09:03:32.569000 master-0 kubenswrapper[26053]: W0318 09:03:32.568174 26053 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568177 26053 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 
09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568180 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568184 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568188 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568191 26053 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568195 26053 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568198 26053 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568202 26053 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568206 26053 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568210 26053 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568213 26053 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568217 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568221 26053 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568227 26053 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568231 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568235 26053 feature_gate.go:330] unrecognized feature gate: Example Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568239 26053 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568243 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 09:03:32.569800 master-0 kubenswrapper[26053]: W0318 09:03:32.568247 26053 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568251 26053 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568255 26053 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568258 26053 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568262 26053 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568265 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568269 26053 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: W0318 09:03:32.568272 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568352 26053 flags.go:64] FLAG: 
--address="0.0.0.0" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568362 26053 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568368 26053 flags.go:64] FLAG: --anonymous-auth="true" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568373 26053 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568410 26053 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568415 26053 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568420 26053 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568425 26053 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568430 26053 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568434 26053 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568438 26053 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568443 26053 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568447 26053 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568451 26053 flags.go:64] FLAG: --cgroup-root="" Mar 18 09:03:32.570460 master-0 kubenswrapper[26053]: I0318 09:03:32.568457 26053 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 18 09:03:32.571936 master-0 
kubenswrapper[26053]: I0318 09:03:32.568461 26053 flags.go:64] FLAG: --client-ca-file="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568465 26053 flags.go:64] FLAG: --cloud-config="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568469 26053 flags.go:64] FLAG: --cloud-provider="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568473 26053 flags.go:64] FLAG: --cluster-dns="[]" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568480 26053 flags.go:64] FLAG: --cluster-domain="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568484 26053 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568489 26053 flags.go:64] FLAG: --config-dir="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568493 26053 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568498 26053 flags.go:64] FLAG: --container-log-max-files="5" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568503 26053 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568507 26053 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568512 26053 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568516 26053 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568520 26053 flags.go:64] FLAG: --contention-profiling="false" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568524 26053 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: 
I0318 09:03:32.568528 26053 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568532 26053 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568538 26053 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568543 26053 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568547 26053 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568551 26053 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568556 26053 flags.go:64] FLAG: --enable-load-reader="false" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568576 26053 flags.go:64] FLAG: --enable-server="true" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568583 26053 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 18 09:03:32.571936 master-0 kubenswrapper[26053]: I0318 09:03:32.568590 26053 flags.go:64] FLAG: --event-burst="100" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568595 26053 flags.go:64] FLAG: --event-qps="50" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568600 26053 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568606 26053 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568610 26053 flags.go:64] FLAG: --eviction-hard="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568625 26053 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568630 
26053 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568634 26053 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568638 26053 flags.go:64] FLAG: --eviction-soft="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568643 26053 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568647 26053 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568652 26053 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568656 26053 flags.go:64] FLAG: --experimental-mounter-path="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568660 26053 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568665 26053 flags.go:64] FLAG: --fail-swap-on="true" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568669 26053 flags.go:64] FLAG: --feature-gates="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568676 26053 flags.go:64] FLAG: --file-check-frequency="20s" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568680 26053 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568685 26053 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568690 26053 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568695 26053 flags.go:64] FLAG: --healthz-port="10248" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568700 26053 
flags.go:64] FLAG: --help="false" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568704 26053 flags.go:64] FLAG: --hostname-override="" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568709 26053 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568713 26053 flags.go:64] FLAG: --http-check-frequency="20s" Mar 18 09:03:32.573070 master-0 kubenswrapper[26053]: I0318 09:03:32.568717 26053 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568722 26053 flags.go:64] FLAG: --image-credential-provider-config="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568726 26053 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568730 26053 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568734 26053 flags.go:64] FLAG: --image-service-endpoint="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568738 26053 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568743 26053 flags.go:64] FLAG: --kube-api-burst="100" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568747 26053 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568753 26053 flags.go:64] FLAG: --kube-api-qps="50" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568762 26053 flags.go:64] FLAG: --kube-reserved="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568766 26053 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568771 26053 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568775 26053 flags.go:64] FLAG: --kubelet-cgroups="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568780 26053 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568784 26053 flags.go:64] FLAG: --lock-file="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568788 26053 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568792 26053 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568797 26053 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568803 26053 flags.go:64] FLAG: --log-json-split-stream="false" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568807 26053 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568812 26053 flags.go:64] FLAG: --log-text-split-stream="false" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568816 26053 flags.go:64] FLAG: --logging-format="text" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568821 26053 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568825 26053 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568829 26053 flags.go:64] FLAG: --manifest-url="" Mar 18 09:03:32.574071 master-0 kubenswrapper[26053]: I0318 09:03:32.568834 26053 flags.go:64] FLAG: --manifest-url-header="" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568839 26053 flags.go:64] FLAG: 
--max-housekeeping-interval="15s" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568844 26053 flags.go:64] FLAG: --max-open-files="1000000" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568849 26053 flags.go:64] FLAG: --max-pods="110" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568853 26053 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568857 26053 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568862 26053 flags.go:64] FLAG: --memory-manager-policy="None" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568866 26053 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568870 26053 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568874 26053 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568878 26053 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568888 26053 flags.go:64] FLAG: --node-status-max-images="50" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568892 26053 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568897 26053 flags.go:64] FLAG: --oom-score-adj="-999" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568901 26053 flags.go:64] FLAG: --pod-cidr="" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568905 26053 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568913 26053 flags.go:64] FLAG: --pod-manifest-path="" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568919 26053 flags.go:64] FLAG: --pod-max-pids="-1" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568923 26053 flags.go:64] FLAG: --pods-per-core="0" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568928 26053 flags.go:64] FLAG: --port="10250" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568932 26053 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568936 26053 flags.go:64] FLAG: --provider-id="" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568940 26053 flags.go:64] FLAG: --qos-reserved="" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568944 26053 flags.go:64] FLAG: --read-only-port="10255" Mar 18 09:03:32.575248 master-0 kubenswrapper[26053]: I0318 09:03:32.568949 26053 flags.go:64] FLAG: --register-node="true" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568953 26053 flags.go:64] FLAG: --register-schedulable="true" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568957 26053 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568967 26053 flags.go:64] FLAG: --registry-burst="10" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568972 26053 flags.go:64] FLAG: --registry-qps="5" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568975 26053 flags.go:64] FLAG: --reserved-cpus="" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568979 26053 
flags.go:64] FLAG: --reserved-memory="" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568985 26053 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568989 26053 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568993 26053 flags.go:64] FLAG: --rotate-certificates="false" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.568997 26053 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569001 26053 flags.go:64] FLAG: --runonce="false" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569006 26053 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569010 26053 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569015 26053 flags.go:64] FLAG: --seccomp-default="false" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569020 26053 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569024 26053 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569029 26053 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569033 26053 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569037 26053 flags.go:64] FLAG: --storage-driver-password="root" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569042 26053 flags.go:64] FLAG: --storage-driver-secure="false" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569046 26053 
flags.go:64] FLAG: --storage-driver-table="stats" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569050 26053 flags.go:64] FLAG: --storage-driver-user="root" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569056 26053 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569060 26053 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 18 09:03:32.576180 master-0 kubenswrapper[26053]: I0318 09:03:32.569067 26053 flags.go:64] FLAG: --system-cgroups="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569071 26053 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569077 26053 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569081 26053 flags.go:64] FLAG: --tls-cert-file="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569085 26053 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569091 26053 flags.go:64] FLAG: --tls-min-version="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569096 26053 flags.go:64] FLAG: --tls-private-key-file="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569100 26053 flags.go:64] FLAG: --topology-manager-policy="none" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569104 26053 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569108 26053 flags.go:64] FLAG: --topology-manager-scope="container" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569112 26053 flags.go:64] FLAG: --v="2" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569118 26053 flags.go:64] FLAG: --version="false" 
Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569124 26053 flags.go:64] FLAG: --vmodule="" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569129 26053 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: I0318 09:03:32.569134 26053 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569243 26053 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569249 26053 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569253 26053 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569257 26053 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569261 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569265 26053 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569268 26053 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569273 26053 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 18 09:03:32.577211 master-0 kubenswrapper[26053]: W0318 09:03:32.569278 26053 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569282 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569287 26053 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569291 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569294 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569298 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569301 26053 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569306 26053 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569311 26053 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569317 26053 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569321 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569325 26053 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569330 26053 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569334 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569338 26053 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569342 26053 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569346 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569349 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569353 26053 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 18 09:03:32.579023 master-0 kubenswrapper[26053]: W0318 09:03:32.569358 26053 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569362 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569365 26053 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569369 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569372 26053 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569376 26053 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 18 09:03:32.579858 
master-0 kubenswrapper[26053]: W0318 09:03:32.569380 26053 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569383 26053 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569387 26053 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569390 26053 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569394 26053 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569398 26053 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569401 26053 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569405 26053 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569408 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569412 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569416 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569419 26053 feature_gate.go:330] unrecognized feature gate: Example Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569423 26053 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 18 
09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569426 26053 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 18 09:03:32.579858 master-0 kubenswrapper[26053]: W0318 09:03:32.569431 26053 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569435 26053 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569440 26053 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569444 26053 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569448 26053 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569451 26053 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569455 26053 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569459 26053 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569462 26053 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569466 26053 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569469 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569473 26053 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 18 09:03:32.580699 master-0 
kubenswrapper[26053]: W0318 09:03:32.569478 26053 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569482 26053 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569486 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569490 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569495 26053 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569499 26053 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569502 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 18 09:03:32.580699 master-0 kubenswrapper[26053]: W0318 09:03:32.569506 26053 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.569509 26053 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.569513 26053 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.569517 26053 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.569520 26053 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.569524 26053 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig 
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: I0318 09:03:32.569535 26053 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: I0318 09:03:32.573616 26053 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: I0318 09:03:32.573634 26053 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573695 26053 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573701 26053 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573706 26053 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573710 26053 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573714 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573718 26053 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:03:32.582784 master-0 kubenswrapper[26053]: W0318 09:03:32.573723 26053 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573727 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573731 26053 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573735 26053 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573738 26053 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573742 26053 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573747 26053 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573753 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573757 26053 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573761 26053 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573764 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573768 26053 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573772 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573775 26053 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573779 26053 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573782 26053 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573786 26053 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573790 26053 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573794 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573797 26053 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:03:32.583466 master-0 kubenswrapper[26053]: W0318 09:03:32.573801 26053 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573805 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573809 26053 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573813 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573817 26053 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573822 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573826 26053 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573830 26053 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573834 26053 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573839 26053 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573844 26053 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573849 26053 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573853 26053 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573857 26053 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573861 26053 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573865 26053 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573868 26053 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573872 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573876 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:03:32.584311 master-0 kubenswrapper[26053]: W0318 09:03:32.573879 26053 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573884 26053 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573887 26053 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573891 26053 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573895 26053 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573898 26053 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573902 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573906 26053 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573910 26053 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573913 26053 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573917 26053 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573920 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573924 26053 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573927 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573931 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573935 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573938 26053 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573942 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573945 26053 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573949 26053 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:03:32.585112 master-0 kubenswrapper[26053]: W0318 09:03:32.573953 26053 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573956 26053 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573960 26053 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573964 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573968 26053 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573972 26053 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.573976 26053 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: I0318 09:03:32.573982 26053 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574089 26053 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574097 26053 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574101 26053 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574105 26053 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574109 26053 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574113 26053 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574117 26053 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 18 09:03:32.585954 master-0 kubenswrapper[26053]: W0318 09:03:32.574120 26053 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574124 26053 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574129 26053 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574134 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574138 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574142 26053 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574146 26053 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574149 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574153 26053 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574157 26053 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574161 26053 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574164 26053 feature_gate.go:330] unrecognized feature gate: Example
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574168 26053 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574171 26053 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574175 26053 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574179 26053 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574183 26053 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574187 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574190 26053 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 18 09:03:32.586487 master-0 kubenswrapper[26053]: W0318 09:03:32.574194 26053 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574198 26053 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574201 26053 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574205 26053 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574208 26053 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574212 26053 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574217 26053 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574222 26053 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574226 26053 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574230 26053 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574234 26053 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574238 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574242 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574246 26053 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574250 26053 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574253 26053 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574257 26053 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574261 26053 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574264 26053 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 18 09:03:32.587345 master-0 kubenswrapper[26053]: W0318 09:03:32.574268 26053 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574272 26053 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574275 26053 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574279 26053 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574282 26053 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574286 26053 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574290 26053 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574348 26053 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574352 26053 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574356 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574360 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574363 26053 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574367 26053 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574371 26053 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574374 26053 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574378 26053 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574382 26053 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574387 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574390 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574395 26053 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 18 09:03:32.588111 master-0 kubenswrapper[26053]: W0318 09:03:32.574400 26053 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574404 26053 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574407 26053 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574411 26053 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574418 26053 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574422 26053 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: W0318 09:03:32.574425 26053 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.574431 26053 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.574596 26053 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.576419 26053 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.576511 26053 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.576774 26053 server.go:997] "Starting client certificate rotation"
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.576788 26053 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 18 09:03:32.588876 master-0 kubenswrapper[26053]: I0318 09:03:32.576960 26053 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 04:40:37.21266164 +0000 UTC
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.577045 26053 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h37m4.635619719s for next certificate rotation
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.577478 26053 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.578672 26053 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.581664 26053 log.go:25] "Validated CRI v1 runtime API"
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.586582 26053 log.go:25] "Validated CRI v1 image API"
Mar 18 09:03:32.589337 master-0 kubenswrapper[26053]: I0318 09:03:32.587817 26053 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 18 09:03:32.599383 master-0 kubenswrapper[26053]: I0318 09:03:32.597492 26053 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2
910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4 c54ba44d-560c-4408-b24b-989ec8b7c22d:/dev/vda3] Mar 18 09:03:32.599468 master-0 kubenswrapper[26053]: I0318 09:03:32.597538 26053 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a/userdata/shm major:0 minor:1158 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d/userdata/shm major:0 minor:1017 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0dc14cc88891929c02d96732c893456d82425d1db68dfef9ae085c39e17cfc21/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0dc14cc88891929c02d96732c893456d82425d1db68dfef9ae085c39e17cfc21/userdata/shm major:0 minor:564 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1307b515e04cb833c9f1e9d6e14d178f8505b7f9e092ede28bdd570b3c7ab5f2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1307b515e04cb833c9f1e9d6e14d178f8505b7f9e092ede28bdd570b3c7ab5f2/userdata/shm major:0 minor:780 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/13a068e44f036eb5ea2827a8a27172c655290a87fa0428a7b71b67b8505f2fbb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/13a068e44f036eb5ea2827a8a27172c655290a87fa0428a7b71b67b8505f2fbb/userdata/shm major:0 minor:92 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16a1ea739ab8f65d8a4f8df45a743988b1ba71abf3b8764f36d6dbcba21ceced/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16a1ea739ab8f65d8a4f8df45a743988b1ba71abf3b8764f36d6dbcba21ceced/userdata/shm major:0 minor:762 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d/userdata/shm major:0 minor:765 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1e613a3e031cd6ea2569b0de90a9eb4c58efa7686815ccbe34135809d0dec254/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1e613a3e031cd6ea2569b0de90a9eb4c58efa7686815ccbe34135809d0dec254/userdata/shm major:0 minor:776 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1fc4aaf36f3d357358d477445a6e46751b37db5a1b5d446f108b4d2b190e035d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1fc4aaf36f3d357358d477445a6e46751b37db5a1b5d446f108b4d2b190e035d/userdata/shm major:0 minor:113 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm major:0 minor:99 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm major:0 minor:253 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449/userdata/shm major:0 minor:397 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/30c4f18dcbcc9f18a43ee88da7092e594b453df2ae8b1fce02caf6e61a63685f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30c4f18dcbcc9f18a43ee88da7092e594b453df2ae8b1fce02caf6e61a63685f/userdata/shm major:0 minor:112 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/317bca26800a314970aa73cabc27ffb650dc50aed545acb8b5a9d2409b853eae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/317bca26800a314970aa73cabc27ffb650dc50aed545acb8b5a9d2409b853eae/userdata/shm major:0 minor:1078 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/34190ff24c5d64d3f04ee73c9371b2fe699e4dc756931f93643f7e454d205294/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34190ff24c5d64d3f04ee73c9371b2fe699e4dc756931f93643f7e454d205294/userdata/shm major:0 minor:518 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/345478a9f31c33009fc0312365cde9a2e83761bfa6df9d1f8521197057d19304/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/345478a9f31c33009fc0312365cde9a2e83761bfa6df9d1f8521197057d19304/userdata/shm major:0 minor:494 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3a452f53888d80954ddda76e2511f1f532656825d47ec252e4f76b2a75b26a96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3a452f53888d80954ddda76e2511f1f532656825d47ec252e4f76b2a75b26a96/userdata/shm major:0 minor:498 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ac5162bd81def353052ebf597421eb671cb88aec927ef74f518a70f421eb249/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ac5162bd81def353052ebf597421eb671cb88aec927ef74f518a70f421eb249/userdata/shm major:0 minor:502 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3b274035f2ac7d46626545fefa2691ceffb107580cf6cf569c0be6a2b76a628f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3b274035f2ac7d46626545fefa2691ceffb107580cf6cf569c0be6a2b76a628f/userdata/shm major:0 minor:426 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a/userdata/shm major:0 minor:327 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ec66dd169d08be1b920bf1865303a7a46910236130e7f06946e53376569a93c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ec66dd169d08be1b920bf1865303a7a46910236130e7f06946e53376569a93c/userdata/shm major:0 minor:495 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5185a35bdc4ad1949570c4b3508eb6c84e58ffd468abe9bcc3bb2a0cb406ece2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5185a35bdc4ad1949570c4b3508eb6c84e58ffd468abe9bcc3bb2a0cb406ece2/userdata/shm major:0 minor:527 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/62a17de80f64346bbd0c33255e42240333a632bbd8223bc931f3c908f3c47ad2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/62a17de80f64346bbd0c33255e42240333a632bbd8223bc931f3c908f3c47ad2/userdata/shm major:0 minor:774 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64e6daddf9e1c75183bc383ad71913a134e81a48cb25bcfeb9ca74c12a1be908/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64e6daddf9e1c75183bc383ad71913a134e81a48cb25bcfeb9ca74c12a1be908/userdata/shm major:0 minor:155 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6f7fc65d624ce13d22d22ba96da2bcd01a27c00fbe5c72b2803f8ccbc5a1dae8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6f7fc65d624ce13d22d22ba96da2bcd01a27c00fbe5c72b2803f8ccbc5a1dae8/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/74b42a82fad4fc08801bc253d1dad3a48f5984717f93c0a00de7af542db7236a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74b42a82fad4fc08801bc253d1dad3a48f5984717f93c0a00de7af542db7236a/userdata/shm major:0 minor:949 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7614f67ab42a92a0cedef41e5a4853cd6e5b7388a0d9d5d3571435c2df397b78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7614f67ab42a92a0cedef41e5a4853cd6e5b7388a0d9d5d3571435c2df397b78/userdata/shm major:0 minor:905 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30/userdata/shm major:0 minor:520 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7d99052b3134ac6e3a86c06ba3a47b78c6cc784b483d36aa7d9f44db2d29bc24/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d99052b3134ac6e3a86c06ba3a47b78c6cc784b483d36aa7d9f44db2d29bc24/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6/userdata/shm major:0 minor:322 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89bd968ec5efc46c09a448832705d02b17ad02bc6a428167a08a2238bdb031ed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89bd968ec5efc46c09a448832705d02b17ad02bc6a428167a08a2238bdb031ed/userdata/shm major:0 minor:760 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8aef2deed01150bfe4043851c63a0e6b97fd934c62137327d4f1c10f4beb1f04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8aef2deed01150bfe4043851c63a0e6b97fd934c62137327d4f1c10f4beb1f04/userdata/shm major:0 minor:154 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm major:0 minor:250 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9/userdata/shm major:0 minor:788 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99b24b432d9d961efa29c66242b9310a2073ba8bdb85f3ff964081d7dab2d588/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99b24b432d9d961efa29c66242b9310a2073ba8bdb85f3ff964081d7dab2d588/userdata/shm major:0 minor:741 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9fe02104a8ebb638006892092dba78285ba64eb0d3e1c75a7de249822d587f12/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9fe02104a8ebb638006892092dba78285ba64eb0d3e1c75a7de249822d587f12/userdata/shm major:0 minor:773 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904/userdata/shm major:0 minor:1074 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656/userdata/shm major:0 minor:767 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b72ac994264149152fe27ab0a6c3a137789afbe22f9ace579dcf4e093554cfc8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b72ac994264149152fe27ab0a6c3a137789afbe22f9ace579dcf4e093554cfc8/userdata/shm major:0 minor:794 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89/userdata/shm major:0 minor:781 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm major:0 minor:55 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62/userdata/shm major:0 minor:481 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe/userdata/shm major:0 minor:82 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c/userdata/shm major:0 minor:853 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd/userdata/shm major:0 minor:968 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66/userdata/shm major:0 minor:1130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c/userdata/shm major:0 minor:606 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ca6a0275fcdb4cece62e11057aa43e164472b8187f168d1b56f7436a566a153a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ca6a0275fcdb4cece62e11057aa43e164472b8187f168d1b56f7436a566a153a/userdata/shm major:0 minor:785 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d08575c558c437f11dbc3ff61697000e9d98f0ee2f13a6f88c21e791f90d00ab/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d08575c558c437f11dbc3ff61697000e9d98f0ee2f13a6f88c21e791f90d00ab/userdata/shm major:0 minor:420 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6446762bc6a0b43e14b052b6b1fde0273d338b8feb7a11225c2093e688292fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6446762bc6a0b43e14b052b6b1fde0273d338b8feb7a11225c2093e688292fc/userdata/shm major:0 minor:598 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6ccfac081e99c6c412564f51ffac7d61d3130a5f00a98585c4f3e1f5ce5443d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6ccfac081e99c6c412564f51ffac7d61d3130a5f00a98585c4f3e1f5ce5443d/userdata/shm major:0 minor:771 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c/userdata/shm major:0 minor:444 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a/userdata/shm major:0 minor:475 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf/userdata/shm major:0 minor:63 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5/userdata/shm major:0 minor:1072 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217/userdata/shm major:0 minor:775 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm major:0 minor:51 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt:{mountpoint:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:682 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv:{mountpoint:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert major:0 minor:245 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~projected/kube-api-access-n76wp:{mountpoint:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~projected/kube-api-access-n76wp major:0 minor:770 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/certs major:0 minor:768 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:769 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~projected/kube-api-access-m2mwd:{mountpoint:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~projected/kube-api-access-m2mwd major:0 minor:1067 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1065 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1070 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~projected/kube-api-access-dcfrf:{mountpoint:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~projected/kube-api-access-dcfrf major:0 minor:487 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/encryption-config major:0 minor:483 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/etcd-client major:0 minor:486 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/serving-cert major:0 minor:524 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/17b1447b-1659-405b-81e0-21f0cf3e7a2c/volumes/kubernetes.io~projected/kube-api-access-rd8zs:{mountpoint:/var/lib/kubelet/pods/17b1447b-1659-405b-81e0-21f0cf3e7a2c/volumes/kubernetes.io~projected/kube-api-access-rd8zs major:0 minor:970 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1c322813-b574-4b46-b760-208ccecd01a5/volumes/kubernetes.io~projected/kube-api-access-9fbs4:{mountpoint:/var/lib/kubelet/pods/1c322813-b574-4b46-b760-208ccecd01a5/volumes/kubernetes.io~projected/kube-api-access-9fbs4 major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4:{mountpoint:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4 major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:488 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:490 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~projected/kube-api-access-z5cgw:{mountpoint:/var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~projected/kube-api-access-z5cgw major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:744 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~projected/kube-api-access-csfl2:{mountpoint:/var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~projected/kube-api-access-csfl2 major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:750 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~projected/kube-api-access-nqgbr:{mountpoint:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~projected/kube-api-access-nqgbr major:0 minor:1066 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1069 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45:{mountpoint:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45 major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:650 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~projected/kube-api-access-z98qs:{mountpoint:/var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~projected/kube-api-access-z98qs major:0 minor:758 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~secret/samples-operator-tls 
major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/ca-certs major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/kube-api-access-t4l97:{mountpoint:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/kube-api-access-t4l97 major:0 minor:439 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d:{mountpoint:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~secret/metrics-tls major:0 minor:493 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e919445-81d0-4663-8941-f596d8121305/volumes/kubernetes.io~projected/kube-api-access-kwp9m:{mountpoint:/var/lib/kubelet/pods/4e919445-81d0-4663-8941-f596d8121305/volumes/kubernetes.io~projected/kube-api-access-kwp9m major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~projected/kube-api-access-2jcqf:{mountpoint:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~projected/kube-api-access-2jcqf major:0 minor:751 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:743 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/webhook-cert major:0 minor:746 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn:{mountpoint:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert major:0 minor:141 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~projected/kube-api-access-774fx:{mountpoint:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~projected/kube-api-access-774fx major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1060 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1071 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw:{mountpoint:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7:{mountpoint:/var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~projected/kube-api-access-kkfms:{mountpoint:/var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~projected/kube-api-access-kkfms major:0 minor:395 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~secret/signing-key major:0 minor:390 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6:{mountpoint:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:491 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~projected/kube-api-access-xjv4l:{mountpoint:/var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~projected/kube-api-access-xjv4l major:0 minor:517 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~secret/serving-cert major:0 minor:455 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~projected/kube-api-access-qkx4s:{mountpoint:/var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~projected/kube-api-access-qkx4s major:0 minor:503 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~secret/serving-cert major:0 minor:456 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k:{mountpoint:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj:{mountpoint:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~secret/webhook-certs major:0 minor:681 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/ca-certs major:0 minor:422 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/kube-api-access-tw5zj:{mountpoint:/var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/kube-api-access-tw5zj major:0 minor:423 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv:{mountpoint:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv major:0 minor:249 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~projected/kube-api-access-g42f4:{mountpoint:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~projected/kube-api-access-g42f4 major:0 minor:1016 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:980 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1015 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~projected/kube-api-access-brzfx:{mountpoint:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~projected/kube-api-access-brzfx major:0 minor:1129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1128 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 minor:1123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6:{mountpoint:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6 major:0 minor:94 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~projected/kube-api-access-xt64s:{mountpoint:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~projected/kube-api-access-xt64s major:0 minor:971 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/default-certificate major:0 minor:966 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/metrics-certs major:0 minor:965 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/stats-auth major:0 minor:961 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~projected/kube-api-access-mr9zx:{mountpoint:/var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~projected/kube-api-access-mr9zx major:0 minor:737 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq:{mountpoint:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/995ec82c-b593-416a-9287-6020a484855c/volumes/kubernetes.io~projected/kube-api-access-4q4k8:{mountpoint:/var/lib/kubelet/pods/995ec82c-b593-416a-9287-6020a484855c/volumes/kubernetes.io~projected/kube-api-access-4q4k8 major:0 minor:752 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:551 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~secret/serving-cert major:0 minor:549 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~projected/kube-api-access-j5nwv:{mountpoint:/var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~projected/kube-api-access-j5nwv major:0 minor:457 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~secret/cert major:0 minor:458 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~projected/kube-api-access-2ggjn:{mountpoint:/var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~projected/kube-api-access-2ggjn major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:748 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~projected/kube-api-access-lczj8:{mountpoint:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~projected/kube-api-access-lczj8 major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/encryption-config major:0 minor:473 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/etcd-client major:0 minor:478 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/serving-cert major:0 minor:479 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~projected/kube-api-access-vhdc2:{mountpoint:/var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~projected/kube-api-access-vhdc2 major:0 minor:852 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~secret/proxy-tls major:0 minor:842 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd:{mountpoint:/var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd major:0 minor:105 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~projected/kube-api-access-9mkcq:{mountpoint:/var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~projected/kube-api-access-9mkcq major:0 minor:597 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~secret/metrics-tls major:0 minor:596 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75:{mountpoint:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5 major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj:{mountpoint:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~projected/kube-api-access-mnn98:{mountpoint:/var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~projected/kube-api-access-mnn98 major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~secret/proxy-tls major:0 minor:745 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf5fd4cc-959e-4878-82e9-b0f90dba6553/volumes/kubernetes.io~projected/kube-api-access-r4jq4:{mountpoint:/var/lib/kubelet/pods/bf5fd4cc-959e-4878-82e9-b0f90dba6553/volumes/kubernetes.io~projected/kube-api-access-r4jq4 major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj:{mountpoint:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj major:0 minor:237 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~secret/metrics-tls major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9:{mountpoint:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9 major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~secret/srv-cert major:0 minor:638 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc:{mountpoint:/var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c5e43736-33c3-4949-98ca-971332541d64/volumes/kubernetes.io~projected/kube-api-access-sqjsq:{mountpoint:/var/lib/kubelet/pods/c5e43736-33c3-4949-98ca-971332541d64/volumes/kubernetes.io~projected/kube-api-access-sqjsq major:0 minor:600 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6176328-5931-405b-8519-8e4bc83bedfb/volumes/kubernetes.io~projected/kube-api-access-5zx99:{mountpoint:/var/lib/kubelet/pods/c6176328-5931-405b-8519-8e4bc83bedfb/volumes/kubernetes.io~projected/kube-api-access-5zx99 major:0 minor:326 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd:{mountpoint:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd major:0 minor:220 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:679 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:571 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/tmp major:0 minor:579 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~projected/kube-api-access-bxshz:{mountpoint:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~projected/kube-api-access-bxshz major:0 minor:580 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~projected/kube-api-access-dfrbj:{mountpoint:/var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~projected/kube-api-access-dfrbj major:0 minor:753 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:749 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cdf1c657-a9dc-455a-b2fd-27a518bc5199/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/cdf1c657-a9dc-455a-b2fd-27a518bc5199/volumes/kubernetes.io~secret/tls-certificates major:0 minor:967 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~projected/kube-api-access-ddsnb:{mountpoint:/var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~projected/kube-api-access-ddsnb major:0 minor:948 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~secret/proxy-tls major:0 minor:944 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e2af879e-1465-40bf-bf72-30c7e89386a3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e2af879e-1465-40bf-bf72-30c7e89386a3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:1143 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd:{mountpoint:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~secret/metrics-certs major:0 minor:683 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp:{mountpoint:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~projected/kube-api-access-k22wv:{mountpoint:/var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~projected/kube-api-access-k22wv major:0 minor:740 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~secret/cert major:0 minor:719 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~projected/kube-api-access-mj95l:{mountpoint:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~projected/kube-api-access-mj95l major:0 minor:736 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cert major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:733 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f198f770-5483-4499-abb6-06026f2c6b37/volumes/kubernetes.io~projected/kube-api-access-sk4w7:{mountpoint:/var/lib/kubelet/pods/f198f770-5483-4499-abb6-06026f2c6b37/volumes/kubernetes.io~projected/kube-api-access-sk4w7 major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2fcd92f-0a58-4c87-8213-715453486aca/volumes/kubernetes.io~projected/kube-api-access-zwnvl:{mountpoint:/var/lib/kubelet/pods/f2fcd92f-0a58-4c87-8213-715453486aca/volumes/kubernetes.io~projected/kube-api-access-zwnvl major:0 minor:738 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx:{mountpoint:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~secret/srv-cert major:0 minor:680 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~projected/kube-api-access-l6p7s:{mountpoint:/var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~projected/kube-api-access-l6p7s major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~secret/serving-cert major:0 minor:747 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~projected/kube-api-access-xzrxv:{mountpoint:/var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~projected/kube-api-access-xzrxv major:0 minor:739 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:720 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm:{mountpoint:/var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm major:0 minor:118 fsType:tmpfs blockSize:0} 
overlay_0-1004:{mountpoint:/var/lib/containers/storage/overlay/1176983a6051babaf9ff7a70507b34d5e9988cd8919588226e6bc873da3f447b/merged major:0 minor:1004 fsType:overlay blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/4c47e45992f82bd9fc61b04be52f443613e73f85006cb4b165d67f2196aea83a/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-1025:{mountpoint:/var/lib/containers/storage/overlay/deb1961c53274f85fdfa7f9ff8246b55bcee8076ea43b4ccd3dce2807fcb5a80/merged major:0 minor:1025 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/9d1bae3aa16dd562f52901525835733aa2672e4ac7ae60aa247c8a1df8bc6219/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-103:{mountpoint:/var/lib/containers/storage/overlay/8d8efcf55379a4100d7208c661f085206aef74963806c7476ff42be27a92696f/merged major:0 minor:103 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/f4b54df9d0a8aeaf9624cc31291969c62c758d2f48b31a38a1b3353c0d173ac0/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1036:{mountpoint:/var/lib/containers/storage/overlay/fd3367723b126ab22c05306f18a0f22617d73e25f9740a779a376cc7876ea0a9/merged major:0 minor:1036 fsType:overlay blockSize:0} overlay_0-1037:{mountpoint:/var/lib/containers/storage/overlay/42fbc91e04d39f298ba108c64223a6f8012e6fb3a871c8b0c3e5dd651a881d0a/merged major:0 minor:1037 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/b50cc4e4fa127f1820bd3fe965df578b069bc75be1e3c246f7a1b9ad67de8e04/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1054:{mountpoint:/var/lib/containers/storage/overlay/5713f6e3e60cf27d0c131f26245abae595878d99efbb0efe9d4574abfb88ca3a/merged major:0 minor:1054 fsType:overlay blockSize:0} overlay_0-1076:{mountpoint:/var/lib/containers/storage/overlay/a78ce13acf1a0ecb219688b1ba895f75e67b212c33a1effc3c82d222eb473ebb/merged major:0 minor:1076 fsType:overlay blockSize:0} 
overlay_0-1080:{mountpoint:/var/lib/containers/storage/overlay/59f213d2a1bd00f79af5f88fd05e1a33cd9a1181c45e98f892a52086b086c11f/merged major:0 minor:1080 fsType:overlay blockSize:0} overlay_0-1082:{mountpoint:/var/lib/containers/storage/overlay/0f0cd4241be2a9b9376c1259481b97f255d4efe5e80f631b54bd2dcd14f693ec/merged major:0 minor:1082 fsType:overlay blockSize:0} overlay_0-1084:{mountpoint:/var/lib/containers/storage/overlay/8f8091dd0d69ab2103f77646bfc55a341a0c8b884e7c887057a2c4e11c824c4d/merged major:0 minor:1084 fsType:overlay blockSize:0} overlay_0-1086:{mountpoint:/var/lib/containers/storage/overlay/b147ad8145a668d6d40bb8062e8740aab43ec3d91df3443f589dc71cda11f23d/merged major:0 minor:1086 fsType:overlay blockSize:0} overlay_0-1092:{mountpoint:/var/lib/containers/storage/overlay/1b2a27a2d3e4acb113f876beeda0307e943707b457fbf570b4969b120cb29a34/merged major:0 minor:1092 fsType:overlay blockSize:0} overlay_0-1097:{mountpoint:/var/lib/containers/storage/overlay/1b51d8ea68e7d66a889e1c0329b1a163430ab2172f86dafd6a9b6b1aae0ced7c/merged major:0 minor:1097 fsType:overlay blockSize:0} overlay_0-1102:{mountpoint:/var/lib/containers/storage/overlay/965aa3f95e8f2d753c0981a4fb4fe915903f1858064d9f038cd72154539d296d/merged major:0 minor:1102 fsType:overlay blockSize:0} overlay_0-1104:{mountpoint:/var/lib/containers/storage/overlay/2083f49013d31ffbe6971d11da990309e687c459a6d7e8ebbe096941595fa774/merged major:0 minor:1104 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/2164d0fd6c32f17e56077c055b03c5e48a19f2bd8b68ecc6cd927ed1da85d941/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/482f45e09a9b8ec3e21d4f26f854c97a643c2e2beec9e574bbea1fd66e29a591/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1132:{mountpoint:/var/lib/containers/storage/overlay/0d2b34787bc2252a0ab14831857f2d8785935e0f58fad283f472b2fe90a16d4e/merged major:0 minor:1132 fsType:overlay 
blockSize:0} overlay_0-1134:{mountpoint:/var/lib/containers/storage/overlay/d2da56154b64d74f3076da644df7cf8ec2dbf24db470f88fbc6742e055d87237/merged major:0 minor:1134 fsType:overlay blockSize:0} overlay_0-1141:{mountpoint:/var/lib/containers/storage/overlay/774e53a268aad40b9492b7bc78a5e1d8b4524b84bd305f736babbb536c00f2c7/merged major:0 minor:1141 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/7dc33dfb2a77c31b29b0bb54d2d282a115cd194ed938ef994cea975254202bb3/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-1160:{mountpoint:/var/lib/containers/storage/overlay/37e3ffb786b7639f32370b8fdd93b47aa0725ccdb1dbdf41f58fd89510dd9557/merged major:0 minor:1160 fsType:overlay blockSize:0} overlay_0-1162:{mountpoint:/var/lib/containers/storage/overlay/5dba4e6d84e3f211496a50976ddb598ab01d8a652b0e28684b9180f60a2f3618/merged major:0 minor:1162 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/1c6466b3628fc565136e0691e17ce5c46d7a0133cb5860fad1a0ca51f87710cd/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/0d8bced82840a9f9232793a0678f33fec9473c217bab4b0586dfc496dd03649d/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/fd8f02f1c530e2c8c730ebbd0d20a130e5e6d6a2c5fe3aa8207d53edf40ad82d/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/2c52541ce41297dde53f2ee5e42c61db071bdf528e3cda352f0bc772ecd71eef/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/785a2543258fcfac11fc02e001bc0a5c78666939b0636940db03e285ef383217/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/f010b2d60901e3cf4c0f602d88d6dab6b4feadfa717dbf721e7dde0ed06810be/merged major:0 minor:150 fsType:overlay 
blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/e535255d072daec106c45655e69a81047f46d8f3bbf07d176ade8b17f39bbf80/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/e398eb315bd3288d9dd8c9e63ce649fba4d300b3bf53a12b8ce06b045a0ecb14/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-165:{mountpoint:/var/lib/containers/storage/overlay/df56dfeedfa55c90961cab02ccc3a9b1aa3954a1c7914c689d2c32cbcfebe423/merged major:0 minor:165 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/104f5eb7f9ffe4ce970eba484b8db25309358a79be39dd34f7a839dab2c56b60/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/543706edf07f3320d5cd38e6e5ca61a30302ad8252a080bf2635e0e1a5c63f07/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/80d2fb3dbde771e01246c44d90576409e83b30249c89b0510346f29557a8c336/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/d602aa7da6a39395e493db7a42692a0debdee6bb1b4b10910d9d16bc383eb04a/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/abdc16823e0755d0654995a3654a683b86460077380b982fc7135adb48e04154/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/82756b23d322695dbcbaf260593537cf6842c7d4f04e58a3cac96424d6bcce13/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/e235d4ec4b3544321e4cf270b88803b29cd98a620827c49a1989f2a088497284/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/f226b950c0151f3df63dde124850a5d8d6cec8b0715a21c93700dd09b23486f8/merged major:0 minor:199 fsType:overlay blockSize:0} 
overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/86628707f516630330e88157bb4daa0caf89f180360072e868b9813deb89bc02/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/61ceae7dc64e24f4b055facb1c2117de17e6e83f4a9b462ff96469f2d19822a5/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/34e4ce1e5bf2b2306a27b6629fd355bb46d2e89cc5e8ea68e68396be1fbd1a03/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/7e958d9b28805626ece5258339e6f7740d66159b568cd33c035744020f42dd04/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/e85533abfb21e7efd7dda7407ac071c6bf9c7636093ce3be21187b402c7515ee/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/fa13797f8c0a37316cec34b029e6bbd39a7a34a19f44c335031ab6a0caead5e9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/1159ffd45426420252c966924c6674dbedebf495156aba3dd158c1a42996ad41/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/bb18310434caec036386994e64eb73d9e44ff1b076f60e12a8109091c24c3b5e/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/9e01d3cb6fdae6d1f0f8adfc8dd5c7e404a31c6b5332a7ec70cf7499626001df/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/48dc29145c1cbff342048d47f991e442f5510e3c2a675475f02415a50b805b8d/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/42e8fce422734819f1a754be3efac84d55c41d508b911c2777eadf3872014faf/merged major:0 minor:293 fsType:overlay blockSize:0} 
overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/c957d50d95dada146baeb76503e396143aae1aec0eb39ea39103eb81c6b8d0b1/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/665bf0b08231fae7a3a5e9a4091b95d34bdc882da29b27f8c997485edd416742/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/2839ae9c74f49fdc3da051bbab6acb476d3c1978d607028fae43a84898f53e3d/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/5c161f0b6046f7c1df35b0458d48c972ad1df1016de839f99a4f7c4432c19040/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/c4e95c8d06862ee05c04c1902ca5403ac14bd55265da0cc776544adf517e31c5/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/b20dab18fb654101063ae0bf6cf1ecd5686592f12a7970e018b76442a8b75bfc/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/d96fa4de921b96cc5a51059780ffa5f1616f373b02d7806c41f632a25b0466bd/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/8faac6338bab2688d3b124f6fef01737a7614f568f526d12f50a0f758665b257/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/2dab67d58a32ab35621bdf93ca1260ec2efdd05c9f1e478411c319cb5d5a457b/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/cf4e764b9225219a2cdd68c6b4a871090eaadfb9fd1b78a8fd0874d5162766ac/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/aaa69fdbb05a54bbd04a061b09cdea21a267acc7e4637b11afd93630324a85d3/merged major:0 minor:320 fsType:overlay blockSize:0} 
overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/acce107f7b603d2f7cfa5d48e8e9b9429aab1874a9e2473a5ed45cbcb1503c66/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-332:{mountpoint:/var/lib/containers/storage/overlay/2d28650910ca4b2c5983848ebece59a17e8a0c67684ad018e8b3b007ee7551a9/merged major:0 minor:332 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/a41c6e2afb35ab1c61657d196167e61d2df2f9185bc53049f1072b162617651d/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/f24c29a781320c66a2bed915becf014592f2d0ccf4a40973c4ef8d27290a94a9/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/f6c1047f480ec2445eeb2b49453172d85ff102def10b75473f4bdda9610ab531/merged major:0 minor:343 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/1ca617e30b5453ac3d50cbb695af34448bc4abf81bf3ab7bb6b723d1b996289e/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/a9234c98912445473d45bbe8beed7b5c184fdd771c98f65d411679843346ef0a/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-348:{mountpoint:/var/lib/containers/storage/overlay/c3285f109f3340952a2808558685f69197ac7d3552505d5b57dd83f232f308e1/merged major:0 minor:348 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/e83333ee5f5662a3dab79562f3b9307793d364e24f92aa2643fdb996a211313a/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/ef3081c2d9266d8694830b9036b248f13b37851be32776905290169fda33624c/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/47969ad7ed32fd40be33e0e42092821c9c086481dbb07887de62e08fe27f7a9b/merged major:0 minor:372 fsType:overlay blockSize:0} 
overlay_0-384:{mountpoint:/var/lib/containers/storage/overlay/4c94befa6a46c62e5670c8bf7ad0bf027b93c95dd5b97a8dddd54efbcd8b0afb/merged major:0 minor:384 fsType:overlay blockSize:0} overlay_0-388:{mountpoint:/var/lib/containers/storage/overlay/d47dd524dee2c2d40fd2cc176308e7cd0c51019fb2c2582bd3fcbf46df284b7d/merged major:0 minor:388 fsType:overlay blockSize:0} overlay_0-396:{mountpoint:/var/lib/containers/storage/overlay/46ca096cf4d8dd8ae740b1715bf13d7420a169de5abadabc88f96ed63011b096/merged major:0 minor:396 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/08e65b6b89a272eee4cb73d8f5538aee8f0a98653692ba1bfc20e87325c5c743/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-402:{mountpoint:/var/lib/containers/storage/overlay/2abf89c617d5a8b8587d87b932b0ae9222af6f4984ee93f0df04f52382370cc3/merged major:0 minor:402 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/776638a62c5943a38afeac707496d617e81b42448d3bfa99b45c635eb6ca1ed1/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/b031bf555809ecf00b2bfb1b77ec50abd286d4f4e19a448684d733b155f99be0/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/334e769d8eab1a192e2acc844281aafcf94396dbe8a7566aa2a7cc467104d2b0/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-411:{mountpoint:/var/lib/containers/storage/overlay/da64f932831a87bf02189d966946ff6667e06c0992d31f021e619d0a47fc10fa/merged major:0 minor:411 fsType:overlay blockSize:0} overlay_0-413:{mountpoint:/var/lib/containers/storage/overlay/22a5f15b0b5c2e485425534b922c95f5090312e5fa6b595bb27e96217d267486/merged major:0 minor:413 fsType:overlay blockSize:0} overlay_0-42:{mountpoint:/var/lib/containers/storage/overlay/14baf176e89aaa32acd83e45d794590fac90f29ef088415234a4abac626b41eb/merged major:0 minor:42 fsType:overlay blockSize:0} 
overlay_0-424:{mountpoint:/var/lib/containers/storage/overlay/6234e6a3130484672b55c3747d9a7fb16beab95b8388a9c4c49270c3081cffc9/merged major:0 minor:424 fsType:overlay blockSize:0} overlay_0-428:{mountpoint:/var/lib/containers/storage/overlay/376b5464caa3d0aa731e521be29a939d505c6e45b4bb29d0a74a55046a6a2634/merged major:0 minor:428 fsType:overlay blockSize:0} overlay_0-431:{mountpoint:/var/lib/containers/storage/overlay/7b4b2c3ed02d18b0f45745fd64a42d9981fdb2005f0344a062ebf1e2899223cc/merged major:0 minor:431 fsType:overlay blockSize:0} overlay_0-432:{mountpoint:/var/lib/containers/storage/overlay/59a651598307c0bec51cfd001fea7c541ba2d08eac898e0a0b255ccb3875a682/merged major:0 minor:432 fsType:overlay blockSize:0} overlay_0-434:{mountpoint:/var/lib/containers/storage/overlay/05c6b06653e25023a1834a24a703ffb92129e3ac1df0f08d2862db97c04825f9/merged major:0 minor:434 fsType:overlay blockSize:0} overlay_0-446:{mountpoint:/var/lib/containers/storage/overlay/58575e5d6c538a497817791c533736adf9e0ade139d69ff3cefeaf57fb0044fd/merged major:0 minor:446 fsType:overlay blockSize:0} overlay_0-448:{mountpoint:/var/lib/containers/storage/overlay/b8693c86327fb3dc63fbe9209c53d504f89314b44e9d00f0c8882120b5a0a9ec/merged major:0 minor:448 fsType:overlay blockSize:0} overlay_0-450:{mountpoint:/var/lib/containers/storage/overlay/a7535dc3824996332680e3d36b1dca6d608a9d08e819d092ecf7eeb21b6f6e3c/merged major:0 minor:450 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/675aa113a30c0814fd59cfa37385a21ed2932142f7716abc63018761c53e3e9c/merged major:0 minor:459 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/33bae1441873e47d361a25e2ae65e85d9bb8e8c69e33f36ffea7bd5484698507/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/1d59d3f445f3c56978099d8cdf2990cfdb56d1640259cc340612e59786827046/merged major:0 minor:461 fsType:overlay blockSize:0} 
overlay_0-462:{mountpoint:/var/lib/containers/storage/overlay/46046daaa8c1b1a7372711472248f1cb4a98b77b3ff3da7c3adc4d7ed8ade670/merged major:0 minor:462 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/731b30d61939dfcfe5cf6f9105df271f4ac8d65ca03e38bc232933c97da9aadf/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/93ed7a53000d7036838ed00c987466383ab2aadd7c0984d0eb19604b18eed0e4/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/538642b7ee4cf5967ba5c9f1b7c34222cdb05353303f772970a45825fc7b033c/merged major:0 minor:484 fsType:overlay blockSize:0} overlay_0-49:{mountpoint:/var/lib/containers/storage/overlay/ef27398e46439471baea25da2ae1b922dfe103302654a17502d6ac9725277db8/merged major:0 minor:49 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/bc0eb0cce3107d1058bd8288ea5e08d7a3d2ae799170ff2c77493a7cc0e13080/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/cc7e7598175c7d2b63668c77558200ccd9952166067b9cd3063c43332b2ad670/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/5aee04aaabf6bfb229598ce13be8cfb56b1288dd86048d8fdf801814d98b6e62/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-506:{mountpoint:/var/lib/containers/storage/overlay/604f8b09fba76fc8c93e8a77f89c94277c61452d87a787dc34fa0773e4e7b4d4/merged major:0 minor:506 fsType:overlay blockSize:0} overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/4d027b5610a6cb03e8757ed1392c4ad1388e892c63436e74086939315152a647/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-510:{mountpoint:/var/lib/containers/storage/overlay/28c5b2eab3b7743e44715db5b5720ccd0277081481d033a72244c7dfbbe382cf/merged major:0 minor:510 fsType:overlay blockSize:0} 
overlay_0-512:{mountpoint:/var/lib/containers/storage/overlay/0713708cb3fdaa7bb17f2999a1c1ceb9c0170585d20c0fbbfb63d8820736801f/merged major:0 minor:512 fsType:overlay blockSize:0} overlay_0-53:{mountpoint:/var/lib/containers/storage/overlay/bcf2b4a0bb61724c0da5661efcee02fb80eb0e4075b5bebdd8bb2ba533e54c62/merged major:0 minor:53 fsType:overlay blockSize:0} overlay_0-532:{mountpoint:/var/lib/containers/storage/overlay/cc735d7b6297e307d1c68b51fa371603fbe5a03446009bd82dfc52a07f58fe9e/merged major:0 minor:532 fsType:overlay blockSize:0} overlay_0-534:{mountpoint:/var/lib/containers/storage/overlay/7d0bdbca00555c0ec6b3b455c95773f149f3fae249866d426e2ad1d229c1bd0b/merged major:0 minor:534 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/a813454e05871761171a285a33f3307925b2d1042656f7d73becc484122aef79/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/bf7e49f82d522676f5e034842efe28f5db50d641764e21e0116636d57a2515dd/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/44e9073ba4d62ed3faee5b0b179be4b70e95f05199495b3212831ffc22d88611/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-566:{mountpoint:/var/lib/containers/storage/overlay/1ee3b165f98a534e77ea43ca3522e11b57fded9905debe636850001c177cea35/merged major:0 minor:566 fsType:overlay blockSize:0} overlay_0-569:{mountpoint:/var/lib/containers/storage/overlay/94d7808d5d14c6d2cfec45c67c2e2061f6c7f5e65c0eb6e0cdceef184ac86753/merged major:0 minor:569 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/7dfbe2eaa24d81f776c1e4dc36f984a6d76bcfa2c789cfe2ed84ad9e4ff2600e/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-581:{mountpoint:/var/lib/containers/storage/overlay/bbdba517aa538d918d8c2d297bf3df6668883d585415d65bccb6fed10982a0db/merged major:0 minor:581 fsType:overlay blockSize:0} 
overlay_0-583:{mountpoint:/var/lib/containers/storage/overlay/ded5acea9dccca85a4d287e4c0bc5072655eaa28e59d3905af96043bb502b1b9/merged major:0 minor:583 fsType:overlay blockSize:0} overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/9ec391e4f21798e86e5ffdfd021c3331509a589d80807a3a245870298a690234/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-590:{mountpoint:/var/lib/containers/storage/overlay/a853d8479a5e9bf29a2687ec4e75dbef01416376e0704f131721634baac1cf48/merged major:0 minor:590 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/da78c30559d5b0fc50772740f4d04b8bd7b779c75bfc4be41ebf2335222a96e7/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-604:{mountpoint:/var/lib/containers/storage/overlay/90308222a940b966913db96b1bf77e58f41fe59e38d3f3b4fc18fec429c6ae1b/merged major:0 minor:604 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/f47cbe6620b216357d952f60d8b5cefd2bfd56dcf225f99a5933a760e08f03bf/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-610:{mountpoint:/var/lib/containers/storage/overlay/7b8beeccc71e2f6069c0aa72816e9baa46869f07fe65493ecf915dedfed51f06/merged major:0 minor:610 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/607a6f43ed1dd0376fad237fcb0dc7e5318abd286c5e02fd214144160f0aeccb/merged major:0 minor:612 fsType:overlay blockSize:0} overlay_0-620:{mountpoint:/var/lib/containers/storage/overlay/1a0aab0d535041093c60b9b47beb7efcd369e18398828159d445d09338aca03a/merged major:0 minor:620 fsType:overlay blockSize:0} overlay_0-623:{mountpoint:/var/lib/containers/storage/overlay/c27b5b4f51f8e68225a100bbaaef71e12830ed71b98e0dd38084da2c7eab8915/merged major:0 minor:623 fsType:overlay blockSize:0} overlay_0-628:{mountpoint:/var/lib/containers/storage/overlay/b98dbfc424486608454ab87cd8af8e63e8df7837c27437f9f7e9fb0e0b57dacb/merged major:0 minor:628 fsType:overlay blockSize:0} 
overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/908fe11bc466ebf65e524f1ced213e50d2df169b68738eef0af991db905766fc/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-633:{mountpoint:/var/lib/containers/storage/overlay/87a9b947d7f2896dda6236d692fb0f3224b5af4e9a1877209bb92b130a8841d8/merged major:0 minor:633 fsType:overlay blockSize:0} overlay_0-634:{mountpoint:/var/lib/containers/storage/overlay/a9980783c90efeca809572824ecd649619c6e9f5f6ff640138d64f7213cbf5e5/merged major:0 minor:634 fsType:overlay blockSize:0} overlay_0-641:{mountpoint:/var/lib/containers/storage/overlay/3be1bbfd447ed7de06b04455d8b21439dd26f303172bf02ca94c864b7085bbd3/merged major:0 minor:641 fsType:overlay blockSize:0} overlay_0-643:{mountpoint:/var/lib/containers/storage/overlay/5eb9471a65d177c6886028ee709976b8d0a347390e253fa77c8774d401f70d21/merged major:0 minor:643 fsType:overlay blockSize:0} overlay_0-653:{mountpoint:/var/lib/containers/storage/overlay/5d8cc6cdaa02415e217d4b70574517b1d04aa1b93221039cec092cde6d95227d/merged major:0 minor:653 fsType:overlay blockSize:0} overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/62021d73ed1dc2a30334d10b40234e0db98bee412966f0409de3d02a803e7f49/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/8e741d3eeb091b2af7128a4016ba946c0c62f5099591be0f3c42c6fd7def058e/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/83649c2abcb52abe198b1c1a52e271549ec6c688a456a31290be98e08c806ef0/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/201144ba492a00badb97697d055bc8411193be740935e84d6686a78e1753734d/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/4b5ed41adffaf5db6265462ac535dc691803ab79fdf64e02055be45673b20592/merged major:0 minor:671 fsType:overlay blockSize:0} 
overlay_0-673:{mountpoint:/var/lib/containers/storage/overlay/4eb9fb5c7ac39288296246726e983d73149a7d439eaf29c90b1538911db51324/merged major:0 minor:673 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/3560574ac507686473828c4ff58426dac8279f1335e5f9dc5181cffc669ad463/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/4db84b67d1f285f6bc0f6eb81a03ceeaee4e77606dbc45e48fa80b5a93497f34/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-704:{mountpoint:/var/lib/containers/storage/overlay/69a3c280180f6b28426c355e049dee3a65176dfa445a0ae88c3f0c3056f2380f/merged major:0 minor:704 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/2e1021c0c2d22d94a615f01e6b1af2efd684c104f79b11549187e295fb4b5c7e/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-714:{mountpoint:/var/lib/containers/storage/overlay/3497f413d50e8cc5414c3e39ccbca5f7bc16a7cbc6c511c1979175038a4a6bc8/merged major:0 minor:714 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/4cea209a04d5206321082ddf27c7d0676c011318051f68d5c4aa8a96f93a1ab4/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/c05ef88b7d969821840b85221425c698632426299f278fb8520c9044e18a6c99/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-792:{mountpoint:/var/lib/containers/storage/overlay/f79710142fc0bda8b0c46c96c8e59fdaac8abc1380787a4082515e48d5d463c9/merged major:0 minor:792 fsType:overlay blockSize:0} overlay_0-797:{mountpoint:/var/lib/containers/storage/overlay/03c748bc7ed03315e8e1a5f5942541b7c52c53964fa58588785b13299fed57ba/merged major:0 minor:797 fsType:overlay blockSize:0} overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/4c3842ce496038287524d8900e0f5bba26e220fd415f2e81ea9ea3cc8c6a2a8d/merged major:0 minor:799 fsType:overlay blockSize:0} 
overlay_0-801:{mountpoint:/var/lib/containers/storage/overlay/9bcaef39592910b7368fb4cbdcebaaf575f255b7f31bbfb6340dda1b85a1ca4a/merged major:0 minor:801 fsType:overlay blockSize:0} overlay_0-803:{mountpoint:/var/lib/containers/storage/overlay/8e97a43df48880a5ad49e38f92d4e473997d3220bfb792ab0222eebfc8a9a73a/merged major:0 minor:803 fsType:overlay blockSize:0} overlay_0-805:{mountpoint:/var/lib/containers/storage/overlay/fa1dd244dd0d0e2783b9cc22a6994e043e2f182e1739bc1365c042ad0c01b252/merged major:0 minor:805 fsType:overlay blockSize:0} overlay_0-807:{mountpoint:/var/lib/containers/storage/overlay/100111d7e37923be8dd7823f8a48050237e71316e2e9ca22260f0edb17ce81ff/merged major:0 minor:807 fsType:overlay blockSize:0} overlay_0-810:{mountpoint:/var/lib/containers/storage/overlay/f452939ff8d0b6044518a672a3f5a31b7ac518c6ea171960eee1b9c4272732aa/merged major:0 minor:810 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/eb746721bbe56efb327ee21e31229417f7c92efd4eb6705c853eb2ff11bb6fba/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-818:{mountpoint:/var/lib/containers/storage/overlay/72a291113106a6ac6cbdc9975911927221159a423ea23fadb23e084b4dfd0fee/merged major:0 minor:818 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/ba1142066b136e65142730688d568559cf8a8c98b8319305e27668cebc58400f/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/83e44e0032ac213533ce6f4a6a9e8238b5ac1bf11e98752c90e45455da357519/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-824:{mountpoint:/var/lib/containers/storage/overlay/41b948789d182211b9fbb3107db8770ab7bdd3c812e1c4cc54af6937620b87ae/merged major:0 minor:824 fsType:overlay blockSize:0} overlay_0-826:{mountpoint:/var/lib/containers/storage/overlay/fba5b8f16fd7e95af63f0d1f777482b5342376b6a535f7ed48ff068742c9adb6/merged major:0 minor:826 fsType:overlay blockSize:0} 
overlay_0-828:{mountpoint:/var/lib/containers/storage/overlay/540bd16c6e8825f0cf21dfd9d1372c69d703d1e8fff5b2b586d460bdf2d385da/merged major:0 minor:828 fsType:overlay blockSize:0} overlay_0-830:{mountpoint:/var/lib/containers/storage/overlay/9de4e924f02f2cd4cafdbb850514adb3375b64bfab327c1472c2a8648eb39035/merged major:0 minor:830 fsType:overlay blockSize:0} overlay_0-832:{mountpoint:/var/lib/containers/storage/overlay/2e66cd337b71de13ac72c6abf331179dbc4b7d78f0dc7f30e6979841d2c62154/merged major:0 minor:832 fsType:overlay blockSize:0} overlay_0-834:{mountpoint:/var/lib/containers/storage/overlay/21ef57c90e6e7d5019a969ab2bb0d2a2250d4e99dd9989c214fa9357f3214702/merged major:0 minor:834 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/cef20f913e7ae9220e55be654179ba099bc517216dc70a587806f7449cbb5f42/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-838:{mountpoint:/var/lib/containers/storage/overlay/b31bb4f29ac08a33f70340df1319306a431933dcc204f084bdcef2f75cb843aa/merged major:0 minor:838 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/a454a19e38bd17e69d24cd9646a2e83e65c5fa1e09535f2a3a7606d77efb8bfe/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/47e8cb661a3f724f230774e3f9623f18db4e7d99730b4a76fef7fd330f0dd9f9/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/1ad55582028f50d3cef1605fd41777cc23e92da3c1175137d5831a268db785b8/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-87:{mountpoint:/var/lib/containers/storage/overlay/c75e8f4cf6a56e9875e5c017edc7a9d149a3c45509a3c9e2f8c5e1467ef96cb1/merged major:0 minor:87 fsType:overlay blockSize:0} overlay_0-873:{mountpoint:/var/lib/containers/storage/overlay/05bbf1002ea44c48be6af76dab09b09f49b8d0763022c5b4651cf18b2e75bab0/merged major:0 minor:873 fsType:overlay blockSize:0} 
overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/2f48e2b8cdbf6becf82fa8bb2b6bfe16ab9dcd1390806525e4d1fb21379505c3/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/5bb50fb6d5475a2306aab75b973c0944709f2ad98a478742447c40098941f287/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-882:{mountpoint:/var/lib/containers/storage/overlay/cd543bf55904e7cf36b4f866d526f251feaa266b2237dddb5c33f081836942ea/merged major:0 minor:882 fsType:overlay blockSize:0} overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/1c9787f0aa45f9ca478c4d9d53540cb44edb7440a1c1af244ba558784ea53735/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-887:{mountpoint:/var/lib/containers/storage/overlay/541aff154062381000c2fe6f19562dd0b1abd3888b40e18552ba5723a2d3c002/merged major:0 minor:887 fsType:overlay blockSize:0} overlay_0-889:{mountpoint:/var/lib/containers/storage/overlay/70338cb620db3438e897fa362691b0ddfec92a1014cd3151c4e4c35d82f283d3/merged major:0 minor:889 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/519c0cfd533fed4bedb8cee3f2ef8ef8532c15ae87eeebcf288d4a03c379aa84/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-894:{mountpoint:/var/lib/containers/storage/overlay/85c87f79d0bd02c15b58e6dd70b17d815601355316dc24d78d3e1b7e1dba4205/merged major:0 minor:894 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/1d8c63dc89b8f0f464e44ffaeef0ffe900d09b92bab797494e596d6801fbd3b7/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-902:{mountpoint:/var/lib/containers/storage/overlay/350cba09d9ed349f581213de044e680cce1d2bd7393166c040f60022d2208767/merged major:0 minor:902 fsType:overlay blockSize:0} overlay_0-906:{mountpoint:/var/lib/containers/storage/overlay/f761f78929d2c51bf30c6b712b5e029f25c0d8b3f326e1cae6ed38b3ec0becec/merged major:0 
minor:906 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/fcd3e94f27d0e4f40ad66c4a6310587ed3ee0d82b08efd6464aa9061285b9781/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/5e92b92185e0386fa0ee8435832292758d75abf3bd873b4cc5eaf511c0376fa9/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-925:{mountpoint:/var/lib/containers/storage/overlay/1ebd556feb4594869b6ae923bbde47b6f79871db8a4ef6b44361af5256c96bb7/merged major:0 minor:925 fsType:overlay blockSize:0} overlay_0-929:{mountpoint:/var/lib/containers/storage/overlay/83e626d9fdf8c5ceb2afc87e4a82ec48f33c762eac0dbb2bc3fb895b22da492c/merged major:0 minor:929 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/aa553ece5437772b9a3618f137ce48e718b99a19772ff139ab099e0f62fe6da6/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-942:{mountpoint:/var/lib/containers/storage/overlay/512525d29786e02eb63f8326821695a4605c0a987f44c6695da0d35b7421a559/merged major:0 minor:942 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/784252ade80e87d513406fbe65ee1c7875c91d59b85a64cb3b552b272f3b91d8/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/32336039223c82b8d76fb1ab1c06a3a5a9e7f2f89da729a87b4716ec782969bf/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/ccc7661eafa7676a4265a4e0ce4cfb40936a55cfc4ddf0d0e6841b02161259f6/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-974:{mountpoint:/var/lib/containers/storage/overlay/f56fb9d34975cd15f14fad39d69ba880dc766b1a57acf17eeed73e5f418964e1/merged major:0 minor:974 fsType:overlay blockSize:0} overlay_0-976:{mountpoint:/var/lib/containers/storage/overlay/9116fbb8cd964190b0eebb4f2e8b6344230dd58ecd00eec345ef62de220409ba/merged major:0 minor:976 
fsType:overlay blockSize:0} overlay_0-982:{mountpoint:/var/lib/containers/storage/overlay/415ac9260a06642746615edd73797f5fc27970740bd0814a034f9e2dee300c2c/merged major:0 minor:982 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/2462bad8e12ab7519d9a8676599599a84b7711603c2c42f1e9bdc5634a1bd6b1/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-990:{mountpoint:/var/lib/containers/storage/overlay/e182e01852bb75a7accce20b00a318af22b7eb3de752199d9fb1d915b32af8a6/merged major:0 minor:990 fsType:overlay blockSize:0} overlay_0-992:{mountpoint:/var/lib/containers/storage/overlay/6ff512092d66839cd5887fc6528f389d23199388d73351e3691f401429494c48/merged major:0 minor:992 fsType:overlay blockSize:0}] Mar 18 09:03:32.662674 master-0 kubenswrapper[26053]: I0318 09:03:32.660413 26053 manager.go:217] Machine: {Timestamp:2026-03-18 09:03:32.658268601 +0000 UTC m=+0.151620022 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:a182270b4b4e4574b525d56213aa67ea SystemUUID:a182270b-4b4e-4574-b525-d56213aa67ea BootID:c890c208-5a3a-4b66-9a9b-e57ae2c6aae9 Filesystems:[{Device:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/kube-api-access-rppm6 DeviceMajor:0 DeviceMinor:236 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74b42a82fad4fc08801bc253d1dad3a48f5984717f93c0a00de7af542db7236a/userdata/shm DeviceMajor:0 DeviceMinor:949 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5f7a3fa8c5b36c0a6bb152021a58a30801cd5a1c1d2647cef87fa01a0bdd128/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108169 
HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-925 DeviceMajor:0 DeviceMinor:925 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1102 DeviceMajor:0 DeviceMinor:1102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-53 DeviceMajor:0 DeviceMinor:53 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~projected/kube-api-access-fnzhn DeviceMajor:0 DeviceMinor:140 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:638 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:683 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768/userdata/shm DeviceMajor:0 DeviceMinor:55 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:735 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cdf1c657-a9dc-455a-b2fd-27a518bc5199/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:967 
Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:489 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3a452f53888d80954ddda76e2511f1f532656825d47ec252e4f76b2a75b26a96/userdata/shm DeviceMajor:0 DeviceMinor:498 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~projected/kube-api-access-l6p7s DeviceMajor:0 DeviceMinor:757 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1015 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf2e729c77c8dcc1816b63b2326e6f2b5171c3d35ed8802a8a640112eae85e62/userdata/shm DeviceMajor:0 DeviceMinor:481 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:490 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:842 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8aef2deed01150bfe4043851c63a0e6b97fd934c62137327d4f1c10f4beb1f04/userdata/shm DeviceMajor:0 DeviceMinor:154 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1c322813-b574-4b46-b760-208ccecd01a5/volumes/kubernetes.io~projected/kube-api-access-9fbs4 DeviceMajor:0 DeviceMinor:732 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99b24b432d9d961efa29c66242b9310a2073ba8bdb85f3ff964081d7dab2d588/userdata/shm DeviceMajor:0 DeviceMinor:741 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-976 DeviceMajor:0 DeviceMinor:976 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-462 DeviceMajor:0 DeviceMinor:462 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~projected/kube-api-access-nmztj DeviceMajor:0 DeviceMinor:232 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:257 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ac5162bd81def353052ebf597421eb671cb88aec927ef74f518a70f421eb249/userdata/shm DeviceMajor:0 DeviceMinor:502 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08f21128e07d665939c2d0c41577d2352ec3b22e6dbd82f3846839a110c79e2d/userdata/shm DeviceMajor:0 DeviceMinor:1017 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fce4e249fbb76d05fe14f32edfd62297db6230d70d6e19d6ad7a50ec7970b217/userdata/shm DeviceMajor:0 DeviceMinor:775 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3827efb6815dbb16a6fe46aec77900fafde56c2e8c5cdf8a95de12d8f38843f8/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1064 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~projected/kube-api-access-lnfwv DeviceMajor:0 DeviceMinor:235 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-448 DeviceMajor:0 DeviceMinor:448 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2fcd92f-0a58-4c87-8213-715453486aca/volumes/kubernetes.io~projected/kube-api-access-zwnvl DeviceMajor:0 DeviceMinor:738 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:966 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~projected/kube-api-access-gp84d DeviceMajor:0 DeviceMinor:247 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 
DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:755 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/81206bcb1647b4c08efbfd17569fc7b2653680bdd8064b363b1831673479ee1e/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/kube-api-access-2plvj DeviceMajor:0 DeviceMinor:237 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-450 DeviceMajor:0 DeviceMinor:450 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-873 DeviceMajor:0 DeviceMinor:873 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1065 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f918d08d-df7c-4e8d-85ba-1c92d766db16/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:747 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~projected/kube-api-access-xzrxv DeviceMajor:0 DeviceMinor:739 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/16a1ea739ab8f65d8a4f8df45a743988b1ba71abf3b8764f36d6dbcba21ceced/userdata/shm DeviceMajor:0 DeviceMinor:762 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1092 DeviceMajor:0 DeviceMinor:1092 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:549 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-797 DeviceMajor:0 DeviceMinor:797 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~projected/kube-api-access-wxgx6 DeviceMajor:0 DeviceMinor:94 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~projected/kube-api-access-g42f4 DeviceMajor:0 DeviceMinor:1016 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bf5fd4cc-959e-4878-82e9-b0f90dba6553/volumes/kubernetes.io~projected/kube-api-access-r4jq4 DeviceMajor:0 DeviceMinor:734 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:456 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a/userdata/shm DeviceMajor:0 DeviceMinor:1158 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/936c1c5ea7d8a039544de89341bf00b6792ab44d21cf236ad59bfd20a0a51ad9/userdata/shm DeviceMajor:0 DeviceMinor:788 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:438 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:488 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ec66dd169d08be1b920bf1865303a7a46910236130e7f06946e53376569a93c/userdata/shm DeviceMajor:0 DeviceMinor:495 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64e6daddf9e1c75183bc383ad71913a134e81a48cb25bcfeb9ca74c12a1be908/userdata/shm DeviceMajor:0 DeviceMinor:155 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:769 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-388 DeviceMajor:0 DeviceMinor:388 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~projected/kube-api-access-z5cgw DeviceMajor:0 DeviceMinor:754 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-805 DeviceMajor:0 DeviceMinor:805 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~projected/kube-api-access-xt64s DeviceMajor:0 DeviceMinor:971 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:579 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1307b515e04cb833c9f1e9d6e14d178f8505b7f9e092ede28bdd570b3c7ab5f2/userdata/shm DeviceMajor:0 DeviceMinor:780 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1134 DeviceMajor:0 DeviceMinor:1134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-413 DeviceMajor:0 DeviceMinor:413 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/c3c61954e21feda03f422b20f9d63bd6912c405f9f67a85dab1db1f6274782fd/userdata/shm DeviceMajor:0 DeviceMinor:968 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6176328-5931-405b-8519-8e4bc83bedfb/volumes/kubernetes.io~projected/kube-api-access-5zx99 DeviceMajor:0 DeviceMinor:326 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~projected/kube-api-access-csfl2 DeviceMajor:0 DeviceMinor:761 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-887 DeviceMajor:0 DeviceMinor:887 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d/userdata/shm DeviceMajor:0 DeviceMinor:51 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-446 DeviceMajor:0 DeviceMinor:446 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-583 DeviceMajor:0 DeviceMinor:583 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aea03d504ef2f838af66f123ab31966d30cbe948b0b47dc0feb84acc63bbf656/userdata/shm DeviceMajor:0 DeviceMinor:767 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1127 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-581 DeviceMajor:0 DeviceMinor:581 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-620 DeviceMajor:0 DeviceMinor:620 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d99052b3134ac6e3a86c06ba3a47b78c6cc784b483d36aa7d9f44db2d29bc24/userdata/shm DeviceMajor:0 DeviceMinor:784 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:221 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34190ff24c5d64d3f04ee73c9371b2fe699e4dc756931f93643f7e454d205294/userdata/shm DeviceMajor:0 DeviceMinor:518 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ba34b3933aeb088c8a44bf92497699577ba58b333ed292879af37568495f962f/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~projected/kube-api-access-mnn98 DeviceMajor:0 DeviceMinor:756 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6f7fc65d624ce13d22d22ba96da2bcd01a27c00fbe5c72b2803f8ccbc5a1dae8/userdata/shm 
DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d08575c558c437f11dbc3ff61697000e9d98f0ee2f13a6f88c21e791f90d00ab/userdata/shm DeviceMajor:0 DeviceMinor:420 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes/kubernetes.io~projected/kube-api-access-qkx4s DeviceMajor:0 DeviceMinor:503 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e48101ca-f356-45e3-93d7-4e17b8d8066c/volumes/kubernetes.io~projected/kube-api-access-47cpd DeviceMajor:0 DeviceMinor:123 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8683c8c6-3a77-4b46-8898-142f9781b49c/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:980 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d66a0e1a66af3412b18eaf6bb7d49b378aad4df6e4a3ab8703f0492b2a8b438/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4e919445-81d0-4663-8941-f596d8121305/volumes/kubernetes.io~projected/kube-api-access-kwp9m DeviceMajor:0 DeviceMinor:415 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e917de8a6a8f9b1b1c6c325604e10e91f09c06b26f45f002fa62fa96185aa27a/userdata/shm DeviceMajor:0 DeviceMinor:475 Capacity:67108864 
Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7cac1300-44c1-4a7d-8d14-efa9702ad9df/volumes/kubernetes.io~projected/kube-api-access-7dn5k DeviceMajor:0 DeviceMinor:125 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/111cad7658297e20b09bb8a0322469d96e6944a07fa207a070a18f99b4bbfc85/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-42 DeviceMajor:0 DeviceMinor:42 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~projected/kube-api-access-xkw45 DeviceMajor:0 DeviceMinor:223 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:244 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-604 DeviceMajor:0 DeviceMinor:604 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcb70fadbcfc61d48c1e2b4ec06918e00580889e40004adc7bcefac11baf1ceb/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/25781967-12ce-490e-94aa-9b9722f495da/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:744 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f6a7f55-84bd-4ea5-8248-4cb565904c3b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-610 DeviceMajor:0 DeviceMinor:610 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-992 DeviceMajor:0 DeviceMinor:992 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-532 DeviceMajor:0 DeviceMinor:532 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:483 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/62a17de80f64346bbd0c33255e42240333a632bbd8223bc931f3c908f3c47ad2/userdata/shm DeviceMajor:0 DeviceMinor:774 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-803 DeviceMajor:0 DeviceMinor:803 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/17b1447b-1659-405b-81e0-21f0cf3e7a2c/volumes/kubernetes.io~projected/kube-api-access-rd8zs DeviceMajor:0 DeviceMinor:970 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1054 DeviceMajor:0 DeviceMinor:1054 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-828 DeviceMajor:0 DeviceMinor:828 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-838 DeviceMajor:0 DeviceMinor:838 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~secret/ovn-node-metrics-cert 
DeviceMajor:0 DeviceMinor:166 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:374 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-428 DeviceMajor:0 DeviceMinor:428 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f81c411903140f1ed67af182269cee687c3cf33776c637366fe64b8e9cc8279e/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-165 DeviceMajor:0 DeviceMinor:165 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/25198ccffb73a61a0d44324871a4bf2386567e2212f2fa517102359c9971071f/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/0f9ba06c-7a6b-4f46-a747-80b0a0b58600/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:245 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2b116d558e216a649546918f836612a6ac48d94d4e8f2cb72966b98c7cf4e449/userdata/shm 
DeviceMajor:0 DeviceMinor:397 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/94e2a8f0-2c2e-43da-9fa9-69edfcd77830/volumes/kubernetes.io~projected/kube-api-access-mr9zx DeviceMajor:0 DeviceMinor:737 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30/userdata/shm DeviceMajor:0 DeviceMinor:520 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/be2682e4-cb63-4102-a83e-ef28023e273a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-826 DeviceMajor:0 DeviceMinor:826 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~projected/kube-api-access-rx9dd DeviceMajor:0 DeviceMinor:220 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:246 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:390 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~projected/kube-api-access-ddsnb DeviceMajor:0 DeviceMinor:948 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-929 DeviceMajor:0 DeviceMinor:929 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/ad6bcec915e6b33c36d0b67453718438f61e0a96034cf76c3052d8a1d9e3df06/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/600c92a1-56c5-497b-a8f0-746830f4180e/volumes/kubernetes.io~projected/kube-api-access-m9mh7 DeviceMajor:0 DeviceMinor:260 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6446762bc6a0b43e14b052b6b1fde0273d338b8feb7a11225c2093e688292fc/userdata/shm DeviceMajor:0 DeviceMinor:598 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5b1f5eb93f4781ad7eb457481d37161ebc8d0cd97fd5fc8d694689aa1b5790c/userdata/shm DeviceMajor:0 DeviceMinor:606 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c00ee838-424f-482b-942f-08f0952a5ccd/volumes/kubernetes.io~projected/kube-api-access-9w4w9 DeviceMajor:0 DeviceMinor:231 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:749 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~projected/kube-api-access-2jcqf DeviceMajor:0 DeviceMinor:751 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/4a9c798432c4910d57904b2bd4d441bf0df0839546f138cc70e48ec5d9012c6a/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7614f67ab42a92a0cedef41e5a4853cd6e5b7388a0d9d5d3571435c2df397b78/userdata/shm DeviceMajor:0 DeviceMinor:905 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fd3388055ed633bef8e022a8b09742a25d6085b3bb671bd2342375ed6f18da63/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/57affd8b-d1ce-40d2-b31e-7b18645ca7b6/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:141 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-818 DeviceMajor:0 DeviceMinor:818 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b5d41e3233b622c13ba073282af1bdf3d224e46b75a003c04d3f6b78e4a19cd2/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:961 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8635320a4b36d9fe143361fc99701a8c79f38835939cb2e0d3b2cf8ebf88349b/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de5504f4eb957b55e61d3335016f112615d1ef2e199a2abbfb8d8f21cdee899c/userdata/shm DeviceMajor:0 DeviceMinor:444 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9fe02104a8ebb638006892092dba78285ba64eb0d3e1c75a7de249822d587f12/userdata/shm DeviceMajor:0 DeviceMinor:773 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-974 DeviceMajor:0 DeviceMinor:974 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-653 DeviceMajor:0 DeviceMinor:653 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-889 DeviceMajor:0 DeviceMinor:889 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b988232227aa085a178c31ee083231aab09e2347dc1af469f15feb00c41b0d1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/93cb5ef1-e8f1-4d11-8c93-1abf24626176/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:965 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1141 DeviceMajor:0 
DeviceMinor:1141 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:743 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-402 DeviceMajor:0 DeviceMinor:402 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:228 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:596 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:681 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~projected/kube-api-access-94zpt DeviceMajor:0 DeviceMinor:240 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/81eefe1b-f683-4740-8fb0-0a5050f9b4a4/volumes/kubernetes.io~projected/kube-api-access-qkkcv DeviceMajor:0 DeviceMinor:249 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1e613a3e031cd6ea2569b0de90a9eb4c58efa7686815ccbe34135809d0dec254/userdata/shm DeviceMajor:0 DeviceMinor:776 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:overlay_0-49 DeviceMajor:0 DeviceMinor:49 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-411 DeviceMajor:0 DeviceMinor:411 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-628 DeviceMajor:0 DeviceMinor:628 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-623 DeviceMajor:0 DeviceMinor:623 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c445746454631d8ce061d0857763b308446517ac6a8ca09e1933cec8fcfb6a97/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/ca9d4694-8675-47c5-819f-89bba9dcdc0f/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:679 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fdd2f1fd-1a94-4f4e-a275-b075f432f763/volumes/kubernetes.io~projected/kube-api-access-fqfdm DeviceMajor:0 DeviceMinor:118 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/09269324-c908-474d-818f-5cd49406f1e2/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:682 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:768 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~projected/kube-api-access-lczj8 DeviceMajor:0 
DeviceMinor:480 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bef948b9-eef4-404b-9b49-6e4a2ceea73b/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:745 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/bb6ef4c4-bff3-4559-8e42-582bbd668b7c/volumes/kubernetes.io~projected/kube-api-access-f2mj5 DeviceMajor:0 DeviceMinor:252 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/680006ef-a955-491e-b6a3-1ca7fcc20165/volumes/kubernetes.io~projected/kube-api-access-kkfms DeviceMajor:0 DeviceMinor:395 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/16c8b28b1f6483c7c92765f4231253e359cc1215e5ae5f3124d625cfaec91b4d/userdata/shm DeviceMajor:0 DeviceMinor:765 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1070 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-534 DeviceMajor:0 DeviceMinor:534 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-902 DeviceMajor:0 DeviceMinor:902 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-590 DeviceMajor:0 DeviceMinor:590 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-673 DeviceMajor:0 DeviceMinor:673 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-424 DeviceMajor:0 DeviceMinor:424 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:479 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a7cf2cff-ca67-4cc6-99e7-99478ab89af4/volumes/kubernetes.io~projected/kube-api-access-vhdc2 DeviceMajor:0 DeviceMinor:852 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-512 DeviceMajor:0 DeviceMinor:512 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-332 DeviceMajor:0 DeviceMinor:332 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes/kubernetes.io~projected/kube-api-access-77sfj DeviceMajor:0 DeviceMinor:234 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1104 DeviceMajor:0 DeviceMinor:1104 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1132 DeviceMajor:0 DeviceMinor:1132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/kube-api-access-t4l97 DeviceMajor:0 DeviceMinor:439 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-566 DeviceMajor:0 DeviceMinor:566 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-982 DeviceMajor:0 DeviceMinor:982 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-894 DeviceMajor:0 DeviceMinor:894 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/13a068e44f036eb5ea2827a8a27172c655290a87fa0428a7b71b67b8505f2fbb/userdata/shm DeviceMajor:0 DeviceMinor:92 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~projected/kube-api-access-ptdsp DeviceMajor:0 DeviceMinor:219 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2268116be19023b1c8385358efae4da2f05525a23575585605fbe5052dde322b/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:458 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/345478a9f31c33009fc0312365cde9a2e83761bfa6df9d1f8521197057d19304/userdata/shm DeviceMajor:0 DeviceMinor:494 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-431 DeviceMajor:0 DeviceMinor:431 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-434 DeviceMajor:0 DeviceMinor:434 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fdb52116-9c55-4464-99c8-fc2e4559996b/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:720 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:243 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-510 DeviceMajor:0 DeviceMinor:510 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/kube-api-access-tw5zj DeviceMajor:0 DeviceMinor:423 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/411d544f-e105-44f0-927a-f61406b3f070/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:443 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~projected/kube-api-access-dcfrf DeviceMajor:0 DeviceMinor:487 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/4192ea44-a38c-4b70-93c3-8070da2ffe2f/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:493 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1025 DeviceMajor:0 DeviceMinor:1025 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1097 DeviceMajor:0 DeviceMinor:1097 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2588f5c-327c-49cc-8cfb-0cce1ad758d5/volumes/kubernetes.io~projected/kube-api-access-9mkcq DeviceMajor:0 DeviceMinor:597 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1082 DeviceMajor:0 DeviceMinor:1082 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d0da6e3-3887-4361-8eae-e7447f9ff72c/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:650 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/bf7a3329-a04c-4b58-9364-b907c00cbe08/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:215 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1fc4aaf36f3d357358d477445a6e46751b37db5a1b5d446f108b4d2b190e035d/userdata/shm DeviceMajor:0 DeviceMinor:113 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-792 DeviceMajor:0 DeviceMinor:792 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:478 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~projected/kube-api-access-mj95l DeviceMajor:0 DeviceMinor:736 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/2a864188-ada6-4ec2-bf9f-72dab210f0ce/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:750 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/14489ef7-8df3-4a3b-a137-3a78e89d425b/volumes/kubernetes.io~projected/kube-api-access-n76wp DeviceMajor:0 DeviceMinor:770 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-882 DeviceMajor:0 DeviceMinor:882 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf/userdata/shm DeviceMajor:0 DeviceMinor:63 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-87 DeviceMajor:0 DeviceMinor:87 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:524 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-704 DeviceMajor:0 DeviceMinor:704 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1084 DeviceMajor:0 DeviceMinor:1084 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1086 DeviceMajor:0 DeviceMinor:1086 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/af1fbcf2-d4de-4015-89fc-2565e855a04d/volumes/kubernetes.io~projected/kube-api-access-r5svd DeviceMajor:0 DeviceMinor:105 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5185a35bdc4ad1949570c4b3508eb6c84e58ffd468abe9bcc3bb2a0cb406ece2/userdata/shm DeviceMajor:0 DeviceMinor:527 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/50a2c23f-26af-4c7f-8ea6-996bcfe173d0/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:746 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:225 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/9cc640bf-cb5f-4493-b47b-6ea6f524525e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:551 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:748 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1080 DeviceMajor:0 DeviceMinor:1080 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e86268c9-7a83-4ccb-979a-feff00cb4b3e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f198f770-5483-4499-abb6-06026f2c6b37/volumes/kubernetes.io~projected/kube-api-access-sk4w7 DeviceMajor:0 DeviceMinor:303 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-432 DeviceMajor:0 DeviceMinor:432 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-714 DeviceMajor:0 DeviceMinor:714 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-643 DeviceMajor:0 DeviceMinor:643 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~projected/kube-api-access-brzfx DeviceMajor:0 DeviceMinor:1129 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:455 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe/userdata/shm DeviceMajor:0 DeviceMinor:82 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/8dacdedc-c6ad-40d4-afdc-59a31be417fe/volumes/kubernetes.io~projected/kube-api-access-g97kq DeviceMajor:0 DeviceMinor:168 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3bf63c21f45da93caf06a2a338ffeb21874020b8683b0b12c95244b028fbf72a/userdata/shm DeviceMajor:0 DeviceMinor:327 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a0cd1cf7-be6f-4baf-8761-69c693476de9/volumes/kubernetes.io~projected/kube-api-access-2ggjn DeviceMajor:0 DeviceMinor:759 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89bd968ec5efc46c09a448832705d02b17ad02bc6a428167a08a2238bdb031ed/userdata/shm DeviceMajor:0 DeviceMinor:760 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ca6a0275fcdb4cece62e11057aa43e164472b8187f168d1b56f7436a566a153a/userdata/shm DeviceMajor:0 DeviceMinor:785 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/d7205eeb-912b-4c31-b08f-ed0b2a1319aa/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:944 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~projected/kube-api-access-257nx DeviceMajor:0 DeviceMinor:230 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/57683f550936db1e32a4ce9e0772053116a76decd109678101700be85f0fac15/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/3898c28b-69b0-46af-b085-37e12d7d80ba/volumes/kubernetes.io~projected/kube-api-access-z98qs DeviceMajor:0 DeviceMinor:758 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66/userdata/shm DeviceMajor:0 DeviceMinor:1130 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~projected/kube-api-access-774fx DeviceMajor:0 DeviceMinor:1068 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1deb139f-1903-417e-835c-28abdd156cdb/volumes/kubernetes.io~projected/kube-api-access-dkmb4 DeviceMajor:0 DeviceMinor:248 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-348 DeviceMajor:0 DeviceMinor:348 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/15798f4d-8bcc-4e24-bb18-8dff1f4edf59/volumes/kubernetes.io~projected/kube-api-access-m2mwd DeviceMajor:0 DeviceMinor:1067 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1076 DeviceMajor:0 DeviceMinor:1076 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c5c995cf-40a0-4cd6-87fa-96a522f7bc57/volumes/kubernetes.io~projected/kube-api-access-rm2rc DeviceMajor:0 DeviceMinor:229 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-641 DeviceMajor:0 DeviceMinor:641 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e2af879e-1465-40bf-bf72-30c7e89386a3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:1143 Capacity:200003584 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-506 DeviceMajor:0 DeviceMinor:506 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-634 DeviceMajor:0 DeviceMinor:634 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a4cdf17679fe34b2ebe526ed953d298c257540b9e977b6d7801fbe8541796904/userdata/shm DeviceMajor:0 DeviceMinor:1074 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes/kubernetes.io~projected/kube-api-access-xjv4l 
DeviceMajor:0 DeviceMinor:517 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65cff83a-8d8f-4e4f-96ef-99941c29ba53/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:571 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/f6833a48-fccb-42bd-ac90-29f08d5bf7e8/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:680 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-942 DeviceMajor:0 DeviceMinor:942 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/volumes/kubernetes.io~projected/kube-api-access-j5nwv DeviceMajor:0 DeviceMinor:457 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1162 DeviceMajor:0 DeviceMinor:1162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0dc14cc88891929c02d96732c893456d82425d1db68dfef9ae085c39e17cfc21/userdata/shm DeviceMajor:0 DeviceMinor:564 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} 
{Device:/var/lib/kubelet/pods/eb8f3615-9e89-4b51-87a2-7d168c81adf3/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:733 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~projected/kube-api-access-k22wv DeviceMajor:0 DeviceMinor:740 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6/userdata/shm DeviceMajor:0 DeviceMinor:322 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30c4f18dcbcc9f18a43ee88da7092e594b453df2ae8b1fce02caf6e61a63685f/userdata/shm DeviceMajor:0 DeviceMinor:112 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-801 DeviceMajor:0 DeviceMinor:801 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1004 DeviceMajor:0 DeviceMinor:1004 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8e583348603a749d4e556bba036d041822b1ae41ee887fca821dc33edae65947/userdata/shm DeviceMajor:0 DeviceMinor:250 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-384 DeviceMajor:0 DeviceMinor:384 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-990 DeviceMajor:0 DeviceMinor:990 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6ccfac081e99c6c412564f51ffac7d61d3130a5f00a98585c4f3e1f5ce5443d/userdata/shm DeviceMajor:0 DeviceMinor:771 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-824 DeviceMajor:0 DeviceMinor:824 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-906 DeviceMajor:0 DeviceMinor:906 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1128 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3b274035f2ac7d46626545fefa2691ceffb107580cf6cf569c0be6a2b76a628f/userdata/shm DeviceMajor:0 DeviceMinor:426 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-810 DeviceMajor:0 DeviceMinor:810 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-396 DeviceMajor:0 DeviceMinor:396 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-103 DeviceMajor:0 DeviceMinor:103 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/cda44dd8-895a-4eab-bedc-83f38efa2482/volumes/kubernetes.io~projected/kube-api-access-bxshz DeviceMajor:0 DeviceMinor:580 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b72ac994264149152fe27ab0a6c3a137789afbe22f9ace579dcf4e093554cfc8/userdata/shm DeviceMajor:0 DeviceMinor:794 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c39b790e4f0dba710e842c418340b16d46173e0451560b3e7fe743c5f356666c/userdata/shm DeviceMajor:0 DeviceMinor:853 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~projected/kube-api-access-nqgbr DeviceMajor:0 DeviceMinor:1066 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2078b34d8e519a78ac9e8ea0c87c5a7d54f6ce3303c5c0ec38a03f5d0d12a9d1/userdata/shm DeviceMajor:0 DeviceMinor:99 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/6c56e1ac-8752-4e46-8692-93716087f0e0/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:491 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7990ab48fdb41a5eca1f84526ed3e4682864205c2abfda2c698a85c11f23f89/userdata/shm DeviceMajor:0 DeviceMinor:781 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1160 DeviceMajor:0 DeviceMinor:1160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 
DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2b59dbf5-0a61-4981-aed3-e73550615c4a/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1069 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-633 DeviceMajor:0 DeviceMinor:633 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/995ec82c-b593-416a-9287-6020a484855c/volumes/kubernetes.io~projected/kube-api-access-4q4k8 DeviceMajor:0 DeviceMinor:752 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-832 DeviceMajor:0 DeviceMinor:832 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f98590df5fb100e44d681ee1b32da7aae204b0a80ffd37a0aa1296d9ed5c3ed5/userdata/shm DeviceMajor:0 DeviceMinor:1072 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e88b021c-c810-4a68-aa48-d8666b52330e/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:719 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1071 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd/volumes/kubernetes.io~projected/kube-api-access-nmv75 DeviceMajor:0 DeviceMinor:226 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-569 DeviceMajor:0 DeviceMinor:569 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26feed0c101f6d451867599cf55613a680653ef7d844a071df5d94dd231f464f/userdata/shm DeviceMajor:0 DeviceMinor:253 Capacity:67108864 Type:vfs 
Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/a1f2b373-0c85-4028-9089-9e9dff5d37b5/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:473 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/599418d3-6afa-46ab-9afa-659134f7ac94/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1060 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/15b6612f-3a51-4a67-a566-8c520f85c6c2/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:486 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1037 DeviceMajor:0 DeviceMinor:1037 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/800297fe-77fd-4f58-ade2-32a147cd7d5c/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:422 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/5f827195-f68d-4bd2-865b-a1f041a5c73e/volumes/kubernetes.io~projected/kube-api-access-jndvw DeviceMajor:0 DeviceMinor:227 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:/var/lib/kubelet/pods/c5e43736-33c3-4949-98ca-971332541d64/volumes/kubernetes.io~projected/kube-api-access-sqjsq DeviceMajor:0 DeviceMinor:600 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-807 DeviceMajor:0 DeviceMinor:807 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d6ff7b83413c43450a6bf628dcc2a6106bc260e7200bd01ce6f1ed9cc232ecc2/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-830 DeviceMajor:0 
DeviceMinor:830 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cdcd27a4-6d46-47af-a14a-65f6501c10f0/volumes/kubernetes.io~projected/kube-api-access-dfrbj DeviceMajor:0 DeviceMinor:753 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-1036 DeviceMajor:0 DeviceMinor:1036 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/95143c61-6f91-4cd4-9411-31c2fb75d4d0/volumes/kubernetes.io~projected/kube-api-access-8t9rq DeviceMajor:0 DeviceMinor:233 Capacity:32475525120 Type:vfs Inodes:4108169 HasInodes:true} {Device:overlay_0-834 DeviceMajor:0 DeviceMinor:834 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/317bca26800a314970aa73cabc27ffb650dc50aed545acb8b5a9d2409b853eae/userdata/shm DeviceMajor:0 DeviceMinor:1078 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2aab1c96f4b8ffa517d8d222973d3490b850d57a2945be4e4157f78f55403973/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0736cc0a2848f72 MacAddress:a2:92:fe:e8:4c:ac Speed:10000 Mtu:8900} {Name:08f21128e07d665 MacAddress:c2:ef:5b:40:00:c1 Speed:10000 Mtu:8900} {Name:1307b515e04cb83 MacAddress:f2:2e:42:ad:f4:28 Speed:10000 Mtu:8900} {Name:16a1ea739ab8f65 MacAddress:2e:e2:59:c7:6a:54 Speed:10000 Mtu:8900} {Name:16c8b28b1f6483c 
MacAddress:da:46:e0:a5:0a:ce Speed:10000 Mtu:8900} {Name:1e613a3e031cd6e MacAddress:7a:2b:ae:e1:bc:f8 Speed:10000 Mtu:8900} {Name:1fc4aaf36f3d357 MacAddress:ea:36:91:8b:2a:83 Speed:10000 Mtu:8900} {Name:2268116be19023b MacAddress:16:2e:14:59:a1:4d Speed:10000 Mtu:8900} {Name:26feed0c101f6d4 MacAddress:96:9d:7d:cf:e3:2c Speed:10000 Mtu:8900} {Name:2aab1c96f4b8ffa MacAddress:da:19:f1:4c:98:23 Speed:10000 Mtu:8900} {Name:2b116d558e216a6 MacAddress:16:8e:59:40:40:46 Speed:10000 Mtu:8900} {Name:30c4f18dcbcc9f1 MacAddress:12:2b:1a:7a:45:88 Speed:10000 Mtu:8900} {Name:317bca26800a314 MacAddress:4a:46:bc:b4:fd:7a Speed:10000 Mtu:8900} {Name:34190ff24c5d64d MacAddress:72:80:95:7b:c9:2e Speed:10000 Mtu:8900} {Name:345478a9f31c330 MacAddress:ca:8c:3f:48:86:a9 Speed:10000 Mtu:8900} {Name:3827efb6815dbb1 MacAddress:4e:a7:48:82:3d:d1 Speed:10000 Mtu:8900} {Name:3a452f53888d809 MacAddress:ae:07:b7:81:b1:34 Speed:10000 Mtu:8900} {Name:3ac5162bd81def3 MacAddress:ca:24:7f:3f:73:6e Speed:10000 Mtu:8900} {Name:3b274035f2ac7d4 MacAddress:56:26:39:5f:ad:db Speed:10000 Mtu:8900} {Name:3bf63c21f45da93 MacAddress:c2:60:b0:55:2f:06 Speed:10000 Mtu:8900} {Name:3ec66dd169d08be MacAddress:9a:8e:9f:86:ab:ec Speed:10000 Mtu:8900} {Name:4a9c798432c4910 MacAddress:f6:1e:da:31:0c:56 Speed:10000 Mtu:8900} {Name:5185a35bdc4ad19 MacAddress:6a:6d:dc:39:96:21 Speed:10000 Mtu:8900} {Name:57683f550936db1 MacAddress:ba:f5:b3:5b:0c:1b Speed:10000 Mtu:8900} {Name:62a17de80f64346 MacAddress:8a:4b:11:69:b3:5a Speed:10000 Mtu:8900} {Name:64e6daddf9e1c75 MacAddress:9a:2c:56:4c:70:ba Speed:10000 Mtu:8900} {Name:6f7fc65d624ce13 MacAddress:42:69:bd:40:9d:40 Speed:10000 Mtu:8900} {Name:74b42a82fad4fc0 MacAddress:7e:7a:3a:ee:08:f4 Speed:10000 Mtu:8900} {Name:78e813f78215ce3 MacAddress:06:30:b7:e2:75:60 Speed:10000 Mtu:8900} {Name:7d99052b3134ac6 MacAddress:ae:60:fa:08:3c:dc Speed:10000 Mtu:8900} {Name:8635320a4b36d9f MacAddress:82:30:49:f1:5c:ed Speed:10000 Mtu:8900} {Name:89bd968ec5efc46 MacAddress:0e:2b:2b:76:fa:5e 
Speed:10000 Mtu:8900} {Name:8aef2deed01150b MacAddress:2a:d0:48:ed:e4:7c Speed:10000 Mtu:8900} {Name:8e583348603a749 MacAddress:3e:04:73:45:21:42 Speed:10000 Mtu:8900} {Name:936c1c5ea7d8a03 MacAddress:ba:1b:42:0c:20:e3 Speed:10000 Mtu:8900} {Name:99b24b432d9d961 MacAddress:aa:7e:f7:d0:a2:7d Speed:10000 Mtu:8900} {Name:9fe02104a8ebb63 MacAddress:d6:4a:e0:53:78:26 Speed:10000 Mtu:8900} {Name:b5d41e3233b622c MacAddress:b2:a3:fd:91:01:eb Speed:10000 Mtu:8900} {Name:b72ac9942641491 MacAddress:5a:04:aa:94:5b:fc Speed:10000 Mtu:8900} {Name:b988232227aa085 MacAddress:56:53:9f:13:d4:7c Speed:10000 Mtu:8900} {Name:ba34b3933aeb088 MacAddress:3a:1f:aa:c9:6d:96 Speed:10000 Mtu:8900} {Name:bf2e729c77c8dcc MacAddress:6a:73:0a:25:96:2a Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:76:3c:3d:4a:0d:9f Speed:0 Mtu:8900} {Name:c3c61954e21feda MacAddress:4a:f1:94:b5:fe:13 Speed:10000 Mtu:8900} {Name:c44219a166b17d2 MacAddress:9e:56:72:40:f2:5b Speed:10000 Mtu:8900} {Name:c445746454631d8 MacAddress:d2:96:72:89:60:d8 Speed:10000 Mtu:8900} {Name:ca6a0275fcdb4ce MacAddress:46:b1:fe:f8:8a:b7 Speed:10000 Mtu:8900} {Name:d08575c558c437f MacAddress:0e:46:6d:1f:fb:2d Speed:10000 Mtu:8900} {Name:d6446762bc6a0b4 MacAddress:0a:de:bb:22:f2:5a Speed:10000 Mtu:8900} {Name:d6ccfac081e99c6 MacAddress:f2:b6:f4:10:01:af Speed:10000 Mtu:8900} {Name:d6ff7b83413c434 MacAddress:2a:84:4d:06:a3:82 Speed:10000 Mtu:8900} {Name:de5504f4eb957b5 MacAddress:b2:71:27:02:4a:54 Speed:10000 Mtu:8900} {Name:e917de8a6a8f9b1 MacAddress:0e:b5:ee:7b:97:75 Speed:10000 Mtu:8900} {Name:ea6882d5a369745 MacAddress:e6:44:a3:67:92:ec Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:21:a5:eb Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:b3:c6:d8 Speed:-1 Mtu:9000} {Name:f81c411903140f1 MacAddress:ba:d5:8a:b7:4d:67 Speed:10000 Mtu:8900} {Name:f98590df5fb100e MacAddress:da:4e:a9:de:f5:0d Speed:10000 Mtu:8900} 
{Name:fcb70fadbcfc61d MacAddress:0a:c0:83:72:75:df Speed:10000 Mtu:8900} {Name:fce4e249fbb76d0 MacAddress:92:d3:80:b1:06:8d Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1a:32:43:41:d1:2f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] 
SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 18 09:03:32.663239 master-0 kubenswrapper[26053]: I0318 09:03:32.662667 26053 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 18 09:03:32.663239 master-0 kubenswrapper[26053]: I0318 09:03:32.662775 26053 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 18 09:03:32.663239 master-0 kubenswrapper[26053]: I0318 09:03:32.663165 26053 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 18 09:03:32.663685 master-0 kubenswrapper[26053]: I0318 09:03:32.663414 26053 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 18 09:03:32.663724 master-0 kubenswrapper[26053]: I0318 09:03:32.663440 26053 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi"
,"Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 18 09:03:32.663771 master-0 kubenswrapper[26053]: I0318 09:03:32.663726 26053 topology_manager.go:138] "Creating topology manager with none policy" Mar 18 09:03:32.663771 master-0 kubenswrapper[26053]: I0318 09:03:32.663754 26053 container_manager_linux.go:303] "Creating device plugin manager" Mar 18 09:03:32.663832 master-0 kubenswrapper[26053]: I0318 09:03:32.663764 26053 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:03:32.663832 master-0 kubenswrapper[26053]: I0318 09:03:32.663807 26053 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 18 09:03:32.663881 master-0 kubenswrapper[26053]: I0318 09:03:32.663860 26053 state_mem.go:36] "Initialized new in-memory state store" Mar 18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.663951 26053 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.664027 26053 kubelet.go:418] "Attempting to sync node with API server" Mar 18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.664040 26053 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.664100 26053 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.664114 26053 kubelet.go:324] "Adding apiserver pod source" Mar 
18 09:03:32.664364 master-0 kubenswrapper[26053]: I0318 09:03:32.664129 26053 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 18 09:03:32.665309 master-0 kubenswrapper[26053]: I0318 09:03:32.665287 26053 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 18 09:03:32.665427 master-0 kubenswrapper[26053]: I0318 09:03:32.665394 26053 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 18 09:03:32.665935 master-0 kubenswrapper[26053]: I0318 09:03:32.665890 26053 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 18 09:03:32.665999 master-0 kubenswrapper[26053]: I0318 09:03:32.665990 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 18 09:03:32.666034 master-0 kubenswrapper[26053]: I0318 09:03:32.666006 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 18 09:03:32.666034 master-0 kubenswrapper[26053]: I0318 09:03:32.666014 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 18 09:03:32.666034 master-0 kubenswrapper[26053]: I0318 09:03:32.666022 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 18 09:03:32.666034 master-0 kubenswrapper[26053]: I0318 09:03:32.666028 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 18 09:03:32.666034 master-0 kubenswrapper[26053]: I0318 09:03:32.666035 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666042 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666049 26053 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666057 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666064 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666074 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666085 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 18 09:03:32.666154 master-0 kubenswrapper[26053]: I0318 09:03:32.666106 26053 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 18 09:03:32.666421 master-0 kubenswrapper[26053]: I0318 09:03:32.666396 26053 server.go:1280] "Started kubelet" Mar 18 09:03:32.666987 master-0 systemd[1]: Started Kubernetes Kubelet. Mar 18 09:03:32.667921 master-0 kubenswrapper[26053]: I0318 09:03:32.667853 26053 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 18 09:03:32.669074 master-0 kubenswrapper[26053]: I0318 09:03:32.669052 26053 server.go:449] "Adding debug handlers to kubelet server" Mar 18 09:03:32.671192 master-0 kubenswrapper[26053]: I0318 09:03:32.667853 26053 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 18 09:03:32.671272 master-0 kubenswrapper[26053]: I0318 09:03:32.671215 26053 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 18 09:03:32.672198 master-0 kubenswrapper[26053]: I0318 09:03:32.672152 26053 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 18 09:03:32.685283 master-0 kubenswrapper[26053]: E0318 09:03:32.685222 26053 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 18 09:03:32.686102 master-0 kubenswrapper[26053]: I0318 09:03:32.686079 26053 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 18 09:03:32.686191 master-0 kubenswrapper[26053]: I0318 09:03:32.686171 26053 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 18 09:03:32.686414 master-0 kubenswrapper[26053]: I0318 09:03:32.686384 26053 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-19 08:38:39 +0000 UTC, rotation deadline is 2026-03-19 04:08:07.017873658 +0000 UTC Mar 18 09:03:32.686414 master-0 kubenswrapper[26053]: I0318 09:03:32.686412 26053 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h4m34.331464333s for next certificate rotation Mar 18 09:03:32.686509 master-0 kubenswrapper[26053]: E0318 09:03:32.686470 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:32.686509 master-0 kubenswrapper[26053]: I0318 09:03:32.686482 26053 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 18 09:03:32.686509 master-0 kubenswrapper[26053]: I0318 09:03:32.686494 26053 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 18 09:03:32.686669 master-0 kubenswrapper[26053]: I0318 09:03:32.686555 26053 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 18 09:03:32.687485 master-0 kubenswrapper[26053]: I0318 09:03:32.687465 26053 factory.go:153] Registering CRI-O factory Mar 18 09:03:32.687485 master-0 kubenswrapper[26053]: I0318 09:03:32.687485 26053 factory.go:221] Registration of the crio container factory successfully Mar 18 09:03:32.687623 master-0 kubenswrapper[26053]: I0318 09:03:32.687541 26053 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 18 09:03:32.687623 master-0 kubenswrapper[26053]: I0318 09:03:32.687550 26053 factory.go:55] Registering systemd factory Mar 18 09:03:32.687623 master-0 kubenswrapper[26053]: I0318 09:03:32.687556 26053 factory.go:221] Registration of the systemd container factory successfully Mar 18 09:03:32.687623 master-0 kubenswrapper[26053]: I0318 09:03:32.687584 26053 factory.go:103] Registering Raw factory Mar 18 09:03:32.687623 master-0 kubenswrapper[26053]: I0318 09:03:32.687598 26053 manager.go:1196] Started watching for new ooms in manager Mar 18 09:03:32.688042 master-0 kubenswrapper[26053]: I0318 09:03:32.688008 26053 manager.go:319] Starting recovery of all containers Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.693901 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="600c92a1-56c5-497b-a8f0-746830f4180e" volumeName="kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694007 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694022 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694033 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="93cb5ef1-e8f1-4d11-8c93-1abf24626176" volumeName="kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694046 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" volumeName="kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694056 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694067 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2588f5c-327c-49cc-8cfb-0cce1ad758d5" volumeName="kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694077 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="411d544f-e105-44f0-927a-f61406b3f070" volumeName="kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694091 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694114 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694122 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694131 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694140 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e2af879e-1465-40bf-bf72-30c7e89386a3" volumeName="kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694151 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694165 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" volumeName="kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694176 26053 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694193 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694205 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694217 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694229 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b7ac7ef-060f-45d2-8988-006d45402e00" volumeName="kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694240 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694251 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="cda44dd8-895a-4eab-bedc-83f38efa2482" volumeName="kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694291 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" volumeName="kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694306 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09269324-c908-474d-818f-5cd49406f1e2" volumeName="kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694317 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8683c8c6-3a77-4b46-8898-142f9781b49c" volumeName="kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694345 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93cb5ef1-e8f1-4d11-8c93-1abf24626176" volumeName="kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694356 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2fcd92f-0a58-4c87-8213-715453486aca" volumeName="kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities" seLinuxMountContext="" Mar 18 09:03:32.694313 master-0 kubenswrapper[26053]: I0318 09:03:32.694366 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="8683c8c6-3a77-4b46-8898-142f9781b49c" volumeName="kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694376 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee838-424f-482b-942f-08f0952a5ccd" volumeName="kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694388 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7205eeb-912b-4c31-b08f-ed0b2a1319aa" volumeName="kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694400 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e48101ca-f356-45e3-93d7-4e17b8d8066c" volumeName="kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694417 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f918d08d-df7c-4e8d-85ba-1c92d766db16" volumeName="kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694430 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a864188-ada6-4ec2-bf9f-72dab210f0ce" volumeName="kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694442 26053 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="2b59dbf5-0a61-4981-aed3-e73550615c4a" volumeName="kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694452 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="411d544f-e105-44f0-927a-f61406b3f070" volumeName="kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694463 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4192ea44-a38c-4b70-93c3-8070da2ffe2f" volumeName="kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694473 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14489ef7-8df3-4a3b-a137-3a78e89d425b" volumeName="kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694482 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="995ec82c-b593-416a-9287-6020a484855c" volumeName="kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694491 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e88b021c-c810-4a68-aa48-d8666b52330e" volumeName="kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694499 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694509 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694517 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" volumeName="kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694526 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5c995cf-40a0-4cd6-87fa-96a522f7bc57" volumeName="kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694672 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2c23f-26af-4c7f-8ea6-996bcfe173d0" volumeName="kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694694 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694706 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd" volumeName="kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694716 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2588f5c-327c-49cc-8cfb-0cce1ad758d5" volumeName="kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls" seLinuxMountContext="" Mar 18 09:03:32.695539 master-0 kubenswrapper[26053]: I0318 09:03:32.694728 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdb52116-9c55-4464-99c8-fc2e4559996b" volumeName="kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images" seLinuxMountContext="" Mar 18 09:03:32.704049 master-0 kubenswrapper[26053]: I0318 09:03:32.694737 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" volumeName="kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704056 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704108 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="995ec82c-b593-416a-9287-6020a484855c" volumeName="kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704130 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704153 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704165 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704180 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704193 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="680006ef-a955-491e-b6a3-1ca7fcc20165" volumeName="kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms" seLinuxMountContext="" Mar 18 09:03:32.704208 master-0 kubenswrapper[26053]: I0318 09:03:32.704212 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cc640bf-cb5f-4493-b47b-6ea6f524525e" volumeName="kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704225 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7b7ac7ef-060f-45d2-8988-006d45402e00" volumeName="kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704239 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" volumeName="kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704248 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca9d4694-8675-47c5-819f-89bba9dcdc0f" volumeName="kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704262 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f918d08d-df7c-4e8d-85ba-1c92d766db16" volumeName="kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704272 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704281 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2c23f-26af-4c7f-8ea6-996bcfe173d0" volumeName="kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704293 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="800297fe-77fd-4f58-ade2-32a147cd7d5c" volumeName="kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704302 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8683c8c6-3a77-4b46-8898-142f9781b49c" volumeName="kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704315 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704324 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704333 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcd27a4-6d46-47af-a14a-65f6501c10f0" volumeName="kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704345 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c322813-b574-4b46-b760-208ccecd01a5" volumeName="kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704355 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1c322813-b574-4b46-b760-208ccecd01a5" volumeName="kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704368 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cc640bf-cb5f-4493-b47b-6ea6f524525e" volumeName="kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704376 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf5fd4cc-959e-4878-82e9-b0f90dba6553" volumeName="kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704385 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d0da6e3-3887-4361-8eae-e7447f9ff72c" volumeName="kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704398 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b7ac7ef-060f-45d2-8988-006d45402e00" volumeName="kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704407 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca9d4694-8675-47c5-819f-89bba9dcdc0f" volumeName="kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704419 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704429 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7205eeb-912b-4c31-b08f-ed0b2a1319aa" volumeName="kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704439 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704451 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704460 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" volumeName="kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704469 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704481 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704491 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d0da6e3-3887-4361-8eae-e7447f9ff72c" volumeName="kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704522 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704533 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdb52116-9c55-4464-99c8-fc2e4559996b" volumeName="kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704543 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704555 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd" volumeName="kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704577 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a0cd1cf7-be6f-4baf-8761-69c693476de9" volumeName="kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704591 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cda44dd8-895a-4eab-bedc-83f38efa2482" volumeName="kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704604 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1c322813-b574-4b46-b760-208ccecd01a5" volumeName="kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4" seLinuxMountContext="" Mar 18 09:03:32.704532 master-0 kubenswrapper[26053]: I0318 09:03:32.704614 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93cb5ef1-e8f1-4d11-8c93-1abf24626176" volumeName="kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704628 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14489ef7-8df3-4a3b-a137-3a78e89d425b" volumeName="kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704637 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" volumeName="kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704649 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a0cd1cf7-be6f-4baf-8761-69c693476de9" volumeName="kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704657 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f918d08d-df7c-4e8d-85ba-1c92d766db16" volumeName="kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704685 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcd27a4-6d46-47af-a14a-65f6501c10f0" volumeName="kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704697 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e48101ca-f356-45e3-93d7-4e17b8d8066c" volumeName="kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704706 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" volumeName="kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704718 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8dacdedc-c6ad-40d4-afdc-59a31be417fe" volumeName="kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704726 26053 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704735 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704746 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8f3615-9e89-4b51-87a2-7d168c81adf3" volumeName="kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704755 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6833a48-fccb-42bd-ac90-29f08d5bf7e8" volumeName="kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704766 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="599418d3-6afa-46ab-9afa-659134f7ac94" volumeName="kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704780 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704800 26053 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="bef948b9-eef4-404b-9b49-6e4a2ceea73b" volumeName="kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704814 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e88b021c-c810-4a68-aa48-d8666b52330e" volumeName="kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704825 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704839 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e919445-81d0-4663-8941-f596d8121305" volumeName="kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704849 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93cb5ef1-e8f1-4d11-8c93-1abf24626176" volumeName="kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704862 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf5fd4cc-959e-4878-82e9-b0f90dba6553" volumeName="kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704874 26053 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="cdcd27a4-6d46-47af-a14a-65f6501c10f0" volumeName="kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704888 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704899 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bef948b9-eef4-404b-9b49-6e4a2ceea73b" volumeName="kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704912 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704923 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdb52116-9c55-4464-99c8-fc2e4559996b" volumeName="kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704932 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704944 26053 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704953 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="411d544f-e105-44f0-927a-f61406b3f070" volumeName="kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704965 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704974 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7cac1300-44c1-4a7d-8d14-efa9702ad9df" volumeName="kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704984 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.704995 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8683c8c6-3a77-4b46-8898-142f9781b49c" volumeName="kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705004 
26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9cc640bf-cb5f-4493-b47b-6ea6f524525e" volumeName="kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705016 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705397 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25781967-12ce-490e-94aa-9b9722f495da" volumeName="kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705417 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="600c92a1-56c5-497b-a8f0-746830f4180e" volumeName="kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705431 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" volumeName="kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705440 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 
kubenswrapper[26053]: I0318 09:03:32.705449 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdf1c657-a9dc-455a-b2fd-27a518bc5199" volumeName="kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705461 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6833a48-fccb-42bd-ac90-29f08d5bf7e8" volumeName="kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705470 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705483 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b59dbf5-0a61-4981-aed3-e73550615c4a" volumeName="kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705516 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b59dbf5-0a61-4981-aed3-e73550615c4a" volumeName="kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls" seLinuxMountContext="" Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705525 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert" seLinuxMountContext="" Mar 18 
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705537 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7cf2cff-ca67-4cc6-99e7-99478ab89af4" volumeName="kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705720 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75" seLinuxMountContext=""
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705741 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09269324-c908-474d-818f-5cd49406f1e2" volumeName="kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls" seLinuxMountContext=""
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705891 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" volumeName="kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l" seLinuxMountContext=""
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705906 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv" seLinuxMountContext=""
Mar 18 09:03:32.705848 master-0 kubenswrapper[26053]: I0318 09:03:32.705920 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c00ee838-424f-482b-942f-08f0952a5ccd" volumeName="kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.705936 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cda44dd8-895a-4eab-bedc-83f38efa2482" volumeName="kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.705949 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.705966 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.705980 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.705994 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f198f770-5483-4499-abb6-06026f2c6b37" volumeName="kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.706010 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2c23f-26af-4c7f-8ea6-996bcfe173d0" volumeName="kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.706023 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="680006ef-a955-491e-b6a3-1ca7fcc20165" volumeName="kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.706039 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.706055 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f918d08d-df7c-4e8d-85ba-1c92d766db16" volumeName="kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.706076 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707254 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707366 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4192ea44-a38c-4b70-93c3-8070da2ffe2f" volumeName="kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707386 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" volumeName="kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707401 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7cf2cff-ca67-4cc6-99e7-99478ab89af4" volumeName="kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707417 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" volumeName="kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707432 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" volumeName="kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707448 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707464 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a864188-ada6-4ec2-bf9f-72dab210f0ce" volumeName="kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707479 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707494 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af1fbcf2-d4de-4015-89fc-2565e855a04d" volumeName="kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707510 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8f3615-9e89-4b51-87a2-7d168c81adf3" volumeName="kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707527 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdb52116-9c55-4464-99c8-fc2e4559996b" volumeName="kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707541 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707556 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2682e4-cb63-4102-a83e-ef28023e273a" volumeName="kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707611 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6176328-5931-405b-8519-8e4bc83bedfb" volumeName="kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707636 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7cf2cff-ca67-4cc6-99e7-99478ab89af4" volumeName="kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707655 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" volumeName="kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707669 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bef948b9-eef4-404b-9b49-6e4a2ceea73b" volumeName="kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707684 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707699 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707716 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707730 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="680006ef-a955-491e-b6a3-1ca7fcc20165" volumeName="kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707745 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707764 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="50a2c23f-26af-4c7f-8ea6-996bcfe173d0" volumeName="kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707783 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707799 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707814 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14489ef7-8df3-4a3b-a137-3a78e89d425b" volumeName="kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707829 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" volumeName="kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707847 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="93cb5ef1-e8f1-4d11-8c93-1abf24626176" volumeName="kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707863 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2fcd92f-0a58-4c87-8213-715453486aca" volumeName="kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707877 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09269324-c908-474d-818f-5cd49406f1e2" volumeName="kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config" seLinuxMountContext=""
Mar 18 09:03:32.707898 master-0 kubenswrapper[26053]: I0318 09:03:32.707891 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.707906 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708102 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2588f5c-327c-49cc-8cfb-0cce1ad758d5" volumeName="kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708120 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cdcd27a4-6d46-47af-a14a-65f6501c10f0" volumeName="kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708137 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25781967-12ce-490e-94aa-9b9722f495da" volumeName="kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708153 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3898c28b-69b0-46af-b085-37e12d7d80ba" volumeName="kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708202 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b7ac7ef-060f-45d2-8988-006d45402e00" volumeName="kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708222 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="800297fe-77fd-4f58-ade2-32a147cd7d5c" volumeName="kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708238 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65cff83a-8d8f-4e4f-96ef-99941c29ba53" volumeName="kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708255 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" volumeName="kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708270 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e88b021c-c810-4a68-aa48-d8666b52330e" volumeName="kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708285 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="599418d3-6afa-46ab-9afa-659134f7ac94" volumeName="kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708300 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a1f2b373-0c85-4028-9089-9e9dff5d37b5" volumeName="kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708315 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708328 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f918d08d-df7c-4e8d-85ba-1c92d766db16" volumeName="kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708344 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708359 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf5fd4cc-959e-4878-82e9-b0f90dba6553" volumeName="kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708373 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708387 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708400 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" volumeName="kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708414 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="599418d3-6afa-46ab-9afa-659134f7ac94" volumeName="kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708428 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708485 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708503 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bef948b9-eef4-404b-9b49-6e4a2ceea73b" volumeName="kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708520 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" volumeName="kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708534 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" volumeName="kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708581 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708602 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1deb139f-1903-417e-835c-28abdd156cdb" volumeName="kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708620 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7205eeb-912b-4c31-b08f-ed0b2a1319aa" volumeName="kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708637 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708655 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8f3615-9e89-4b51-87a2-7d168c81adf3" volumeName="kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708672 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8f3615-9e89-4b51-87a2-7d168c81adf3" volumeName="kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708689 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e86268c9-7a83-4ccb-979a-feff00cb4b3e" volumeName="kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708705 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3898c28b-69b0-46af-b085-37e12d7d80ba" volumeName="kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708725 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708741 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6c56e1ac-8752-4e46-8692-93716087f0e0" volumeName="kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708757 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95143c61-6f91-4cd4-9411-31c2fb75d4d0" volumeName="kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708772 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" volumeName="kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708787 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="995ec82c-b593-416a-9287-6020a484855c" volumeName="kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708802 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5e43736-33c3-4949-98ca-971332541d64" volumeName="kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708818 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eb8f3615-9e89-4b51-87a2-7d168c81adf3" volumeName="kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708832 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fdd2f1fd-1a94-4f4e-a275-b075f432f763" volumeName="kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708847 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708863 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b59dbf5-0a61-4981-aed3-e73550615c4a" volumeName="kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708877 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" volumeName="kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708891 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87381a51-96e6-4e86-bdae-c8ac3fc7a039" volumeName="kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708907 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0cd1cf7-be6f-4baf-8761-69c693476de9" volumeName="kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708924 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf7a3329-a04c-4b58-9364-b907c00cbe08" volumeName="kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708941 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca9d4694-8675-47c5-819f-89bba9dcdc0f" volumeName="kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708956 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15798f4d-8bcc-4e24-bb18-8dff1f4edf59" volumeName="kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708971 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="15b6612f-3a51-4a67-a566-8c520f85c6c2" volumeName="kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708986 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b1447b-1659-405b-81e0-21f0cf3e7a2c" volumeName="kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.708999 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="411d544f-e105-44f0-927a-f61406b3f070" volumeName="kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709014 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2fcd92f-0a58-4c87-8213-715453486aca" volumeName="kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709030 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="599418d3-6afa-46ab-9afa-659134f7ac94" volumeName="kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709045 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="599418d3-6afa-46ab-9afa-659134f7ac94" volumeName="kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709060 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5f827195-f68d-4bd2-865b-a1f041a5c73e" volumeName="kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709075 26053 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="800297fe-77fd-4f58-ade2-32a147cd7d5c" volumeName="kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj" seLinuxMountContext=""
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709088 26053 reconstruct.go:97] "Volume reconstruction finished"
Mar 18 09:03:32.709481 master-0 kubenswrapper[26053]: I0318 09:03:32.709098 26053 reconciler.go:26] "Reconciler: start to sync state"
Mar 18 09:03:32.722534 master-0 kubenswrapper[26053]: I0318 09:03:32.722460 26053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 18 09:03:32.728061 master-0 kubenswrapper[26053]: I0318 09:03:32.728014 26053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 18 09:03:32.728154 master-0 kubenswrapper[26053]: I0318 09:03:32.728073 26053 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 18 09:03:32.728154 master-0 kubenswrapper[26053]: I0318 09:03:32.728124 26053 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 18 09:03:32.728229 master-0 kubenswrapper[26053]: E0318 09:03:32.728177 26053 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 18 09:03:32.741913 master-0 kubenswrapper[26053]: I0318 09:03:32.741862 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log"
Mar 18 09:03:32.742289 master-0 kubenswrapper[26053]: I0318 09:03:32.742233 26053 generic.go:334] "Generic (PLEG): container finished" podID="eb8f3615-9e89-4b51-87a2-7d168c81adf3" containerID="968ae8479a0331117d0f148ecc19dfe89ce58e4b9ba1088bdc7b07d7a970e857" exitCode=1
Mar 18 09:03:32.772056 master-0 kubenswrapper[26053]: I0318 09:03:32.771423 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log"
Mar 18 09:03:32.772056 master-0 kubenswrapper[26053]: I0318 09:03:32.771904 26053 generic.go:334] "Generic (PLEG): container finished" podID="e88b021c-c810-4a68-aa48-d8666b52330e" containerID="191e1385839aadfcf8fad00f70dd0c37383e76893667c6d202209b39b27d4f57" exitCode=255
Mar 18 09:03:32.775307 master-0 kubenswrapper[26053]: I0318 09:03:32.775259 26053 generic.go:334] "Generic (PLEG): container finished" podID="f2fcd92f-0a58-4c87-8213-715453486aca" containerID="9ac32046c5add06c7112266ce422d6cd5a84efecd46bf95a0b99b1364bf42c11" exitCode=0
Mar 18 09:03:32.775307 master-0 kubenswrapper[26053]: I0318 09:03:32.775304 26053 generic.go:334] "Generic (PLEG): container finished" podID="f2fcd92f-0a58-4c87-8213-715453486aca" containerID="e45b21057937437a963f15e3caed2257e18f92ac6c2b138e44af253b2ed1f746" exitCode=0
Mar 18 09:03:32.778172 master-0 kubenswrapper[26053]: I0318 09:03:32.778129 26053 generic.go:334] "Generic (PLEG): container finished" podID="6c56e1ac-8752-4e46-8692-93716087f0e0" containerID="e78bbb854e3d9943cb3fa89e45e1e19c6f32f1732fab0adc69b2c8517be93fa3" exitCode=0
Mar 18 09:03:32.782367 master-0 kubenswrapper[26053]: I0318 09:03:32.782323 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8bbcbb7729919ddcb0aaf177e6b7da70bdb956a0c249d6fd8ccdc6cd23b74071" exitCode=0
Mar 18 09:03:32.782367 master-0 kubenswrapper[26053]: I0318 09:03:32.782363 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="332c9bf8c34c932234aed0104fb033cece220b16a730251a8ed2dddb4807fbb9" exitCode=0
Mar 18 09:03:32.782367 master-0 kubenswrapper[26053]: I0318 09:03:32.782372 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="29cb6a70b4f03bbaa88bb2a9cd200f77d44062bf7d6a056e592a38539d450a65" exitCode=0
Mar 18 09:03:32.782367 master-0 kubenswrapper[26053]: I0318 09:03:32.782380 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="18607609fc2c048f02839d5d864c5753901b636e45e41dd655403f7b6b802044" exitCode=0
Mar 18 09:03:32.782367 master-0 kubenswrapper[26053]: I0318 09:03:32.782387 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="8e721654d7a6dd53ba602bb38e73e10bda4fb74bd83575e72d850a92e1f3620b" exitCode=0
Mar 18 09:03:32.782690 master-0 kubenswrapper[26053]: I0318 09:03:32.782407 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdd2f1fd-1a94-4f4e-a275-b075f432f763" containerID="4d7c904f1acd55b9d920d547c73d752e1d361d2495697dc27fa3307ea6bf7119" exitCode=0
Mar 18 09:03:32.785937 master-0 kubenswrapper[26053]: I0318 09:03:32.785906 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-k6xp5_2d0da6e3-3887-4361-8eae-e7447f9ff72c/package-server-manager/0.log"
Mar 18 09:03:32.786263 master-0 kubenswrapper[26053]: I0318 09:03:32.786227 26053 generic.go:334] "Generic (PLEG): container finished" podID="2d0da6e3-3887-4361-8eae-e7447f9ff72c" containerID="eff8515f7824ab4366b3686f83336181d1ef884da04bbecf12f9008db8dde14c" exitCode=1
Mar 18 09:03:32.786605 master-0 kubenswrapper[26053]: E0318 09:03:32.786582 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 18 09:03:32.788448 master-0 kubenswrapper[26053]: I0318 09:03:32.788416 26053 generic.go:334] "Generic (PLEG): container finished" podID="be2682e4-cb63-4102-a83e-ef28023e273a" containerID="e0c10cb728f84836bdf3fdacd9f7ace9b139b03a5e08557846d8eceff033db2d" exitCode=0
Mar 18 09:03:32.793611 master-0 kubenswrapper[26053]: I0318 09:03:32.793546 26053 generic.go:334] "Generic (PLEG): container finished" podID="bf5fd4cc-959e-4878-82e9-b0f90dba6553" containerID="5a0a55d40814df39dc638c32e9fe75e6b627c413e28b2b6c92eeb933e420f49c" exitCode=0
Mar 18 09:03:32.793611 master-0 kubenswrapper[26053]: I0318 09:03:32.793602 26053 generic.go:334] "Generic (PLEG): container finished" podID="bf5fd4cc-959e-4878-82e9-b0f90dba6553" containerID="a4600607ede35bdc684e56df1e32d786c4e72f5ab0392ea420b4029975f14ee2" exitCode=0
Mar 18 09:03:32.795669 master-0 kubenswrapper[26053]: I0318 09:03:32.795635 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-2g6x9_0f6a7f55-84bd-4ea5-8248-4cb565904c3b/openshift-controller-manager-operator/0.log"
Mar 18 09:03:32.795764 master-0
kubenswrapper[26053]: I0318 09:03:32.795671 26053 generic.go:334] "Generic (PLEG): container finished" podID="0f6a7f55-84bd-4ea5-8248-4cb565904c3b" containerID="66cbf701fabf0e0f193e14614de147bfd5b674f1f5978178edd97cd8b89c12a4" exitCode=1 Mar 18 09:03:32.800937 master-0 kubenswrapper[26053]: I0318 09:03:32.800894 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log" Mar 18 09:03:32.801049 master-0 kubenswrapper[26053]: I0318 09:03:32.800942 26053 generic.go:334] "Generic (PLEG): container finished" podID="38b830ff-8938-4f21-8977-c29a19c85afb" containerID="b28f4dc9cd44e68014d536f9ea9c8387108c84bc538f43d2e6bb244d9d074b11" exitCode=1 Mar 18 09:03:32.805110 master-0 kubenswrapper[26053]: I0318 09:03:32.805076 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/4.log" Mar 18 09:03:32.806163 master-0 kubenswrapper[26053]: I0318 09:03:32.806131 26053 generic.go:334] "Generic (PLEG): container finished" podID="bf7a3329-a04c-4b58-9364-b907c00cbe08" containerID="0914d593ef977819ab7dc6ab7b6e7409b6b30b0704e79df20d36fbbb266a5b50" exitCode=1 Mar 18 09:03:32.816335 master-0 kubenswrapper[26053]: I0318 09:03:32.816228 26053 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="9cb189c47185ee7666cdc7e6aa936134fd95f8598c903e678c39284b0494bcba" exitCode=0 Mar 18 09:03:32.816527 master-0 kubenswrapper[26053]: I0318 09:03:32.816510 26053 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="c26eb3bf03b5fe4ebeece6b8722b565a3875e9cd3bc4e444bee1b43372467a32" exitCode=0 Mar 18 09:03:32.816617 master-0 kubenswrapper[26053]: I0318 09:03:32.816602 26053 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" 
containerID="e9c6441b6451eb8d4f18b81edc159711a0094c083c79128b3e30069808890f14" exitCode=0 Mar 18 09:03:32.825202 master-0 kubenswrapper[26053]: I0318 09:03:32.825146 26053 generic.go:334] "Generic (PLEG): container finished" podID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" exitCode=0 Mar 18 09:03:32.828263 master-0 kubenswrapper[26053]: E0318 09:03:32.828226 26053 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:03:32.828519 master-0 kubenswrapper[26053]: I0318 09:03:32.828494 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/machine-approver-controller/0.log" Mar 18 09:03:32.828983 master-0 kubenswrapper[26053]: I0318 09:03:32.828954 26053 generic.go:334] "Generic (PLEG): container finished" podID="cdcd27a4-6d46-47af-a14a-65f6501c10f0" containerID="ca74e483ee5f7795ddd4a19b8dedb0099339c33aeba4c489fb33f3fdb2d038a6" exitCode=255 Mar 18 09:03:32.831930 master-0 kubenswrapper[26053]: I0318 09:03:32.831882 26053 generic.go:334] "Generic (PLEG): container finished" podID="15b6612f-3a51-4a67-a566-8c520f85c6c2" containerID="ff18d78705a1faf4db66557634d82d49694b96e1033b13b70bf5dd3176027008" exitCode=0 Mar 18 09:03:32.835033 master-0 kubenswrapper[26053]: I0318 09:03:32.834973 26053 generic.go:334] "Generic (PLEG): container finished" podID="c393a935-1821-4742-b1bb-0ee52ada5434" containerID="82098974401c2078cdae0b9cda75b7a09e79d037d34e1919901dd8a75694e9fb" exitCode=0 Mar 18 09:03:32.837646 master-0 kubenswrapper[26053]: I0318 09:03:32.837617 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-m8p9p_81eefe1b-f683-4740-8fb0-0a5050f9b4a4/openshift-apiserver-operator/1.log" Mar 18 09:03:32.837787 master-0 kubenswrapper[26053]: I0318 
09:03:32.837654 26053 generic.go:334] "Generic (PLEG): container finished" podID="81eefe1b-f683-4740-8fb0-0a5050f9b4a4" containerID="f271faf0d7c55de8efcccdde7688825092dfb7f1d00e1599288466a5a990a816" exitCode=255 Mar 18 09:03:32.841159 master-0 kubenswrapper[26053]: I0318 09:03:32.841118 26053 generic.go:334] "Generic (PLEG): container finished" podID="a1f2b373-0c85-4028-9089-9e9dff5d37b5" containerID="1820c7b891866f2da2386244d406850e2ca41824fea9e45fc4a61e84270cbb14" exitCode=0 Mar 18 09:03:32.844940 master-0 kubenswrapper[26053]: I0318 09:03:32.844863 26053 generic.go:334] "Generic (PLEG): container finished" podID="680006ef-a955-491e-b6a3-1ca7fcc20165" containerID="f668ca32df6831c1852bfec6ac04b2b91b947fda7bf3560ef4ffe10748867750" exitCode=0 Mar 18 09:03:32.886717 master-0 kubenswrapper[26053]: E0318 09:03:32.886682 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:32.889417 master-0 kubenswrapper[26053]: I0318 09:03:32.889391 26053 generic.go:334] "Generic (PLEG): container finished" podID="65cff83a-8d8f-4e4f-96ef-99941c29ba53" containerID="26f8c4214ea54fb5e2ff7d9fa93e91ddc6301a4725fdb41f15e4fe0ec185b735" exitCode=0 Mar 18 09:03:32.908694 master-0 kubenswrapper[26053]: I0318 09:03:32.908654 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-lf7kq_57affd8b-d1ce-40d2-b31e-7b18645ca7b6/approver/1.log" Mar 18 09:03:32.911267 master-0 kubenswrapper[26053]: I0318 09:03:32.911231 26053 generic.go:334] "Generic (PLEG): container finished" podID="57affd8b-d1ce-40d2-b31e-7b18645ca7b6" containerID="8adfaf98ac3f7666cf99c8210bf62f09cc200963ab9628e3f3b8887a2ea80d44" exitCode=1 Mar 18 09:03:32.917116 master-0 kubenswrapper[26053]: I0318 09:03:32.917082 26053 generic.go:334] "Generic (PLEG): container finished" podID="c5c995cf-40a0-4cd6-87fa-96a522f7bc57" containerID="f746e038f97898d00b98367b1de674491c64f30a9f70b4c41c7083bf263f99b2" 
exitCode=0 Mar 18 09:03:32.920281 master-0 kubenswrapper[26053]: I0318 09:03:32.920251 26053 generic.go:334] "Generic (PLEG): container finished" podID="2a864188-ada6-4ec2-bf9f-72dab210f0ce" containerID="0dee431f1bab8eafebe24c7c7116af4c82f57849d3fa9f78e391b177e72f8116" exitCode=0 Mar 18 09:03:32.921684 master-0 kubenswrapper[26053]: I0318 09:03:32.921645 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/4.log" Mar 18 09:03:32.921769 master-0 kubenswrapper[26053]: I0318 09:03:32.921690 26053 generic.go:334] "Generic (PLEG): container finished" podID="4e919445-81d0-4663-8941-f596d8121305" containerID="97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4" exitCode=1 Mar 18 09:03:32.925924 master-0 kubenswrapper[26053]: I0318 09:03:32.925879 26053 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f" exitCode=1 Mar 18 09:03:32.937663 master-0 kubenswrapper[26053]: I0318 09:03:32.937532 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/1.log" Mar 18 09:03:32.937972 master-0 kubenswrapper[26053]: I0318 09:03:32.937860 26053 generic.go:334] "Generic (PLEG): container finished" podID="e86268c9-7a83-4ccb-979a-feff00cb4b3e" containerID="9c9d46ecc19961b32a9a632092c439cef6feaecffc62b43586ab2e3093d0896c" exitCode=255 Mar 18 09:03:32.940100 master-0 kubenswrapper[26053]: I0318 09:03:32.940048 26053 generic.go:334] "Generic (PLEG): container finished" podID="ca9d4694-8675-47c5-819f-89bba9dcdc0f" containerID="c88fcd910d6e8db24ed27b15176e93cabbfee77fff73e20a53806a79c06e2fd5" exitCode=0 Mar 18 09:03:32.941587 master-0 kubenswrapper[26053]: I0318 09:03:32.941537 
26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log" Mar 18 09:03:32.941648 master-0 kubenswrapper[26053]: I0318 09:03:32.941604 26053 generic.go:334] "Generic (PLEG): container finished" podID="0f9ba06c-7a6b-4f46-a747-80b0a0b58600" containerID="2cfc620769df1869217ef2bafc4fb4d7ac92515611935bd9cfb8d767d6392d6b" exitCode=255 Mar 18 09:03:32.945050 master-0 kubenswrapper[26053]: I0318 09:03:32.945025 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 18 09:03:32.945760 master-0 kubenswrapper[26053]: I0318 09:03:32.945733 26053 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526" exitCode=1 Mar 18 09:03:32.945760 master-0 kubenswrapper[26053]: I0318 09:03:32.945757 26053 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4" exitCode=0 Mar 18 09:03:32.956406 master-0 kubenswrapper[26053]: I0318 09:03:32.955999 26053 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35" exitCode=2 Mar 18 09:03:32.956406 master-0 kubenswrapper[26053]: I0318 09:03:32.956050 26053 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80" exitCode=0 Mar 18 09:03:32.987437 master-0 kubenswrapper[26053]: E0318 09:03:32.987384 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"master-0\" not found" Mar 18 09:03:32.999540 master-0 kubenswrapper[26053]: I0318 09:03:32.995907 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/config-sync-controllers/0.log" Mar 18 09:03:32.999540 master-0 kubenswrapper[26053]: I0318 09:03:32.996605 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:03:32.999540 master-0 kubenswrapper[26053]: I0318 09:03:32.996647 26053 generic.go:334] "Generic (PLEG): container finished" podID="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" containerID="096ac353f933435e5c018fb15b66b68ffb3a1e47071e3f93549e3c9af4316fb4" exitCode=1 Mar 18 09:03:32.999540 master-0 kubenswrapper[26053]: I0318 09:03:32.996670 26053 generic.go:334] "Generic (PLEG): container finished" podID="94e2a8f0-2c2e-43da-9fa9-69edfcd77830" containerID="77222f1857306a427ed0136d01e66abea08222205dcb9a92415c3629bd81b945" exitCode=1 Mar 18 09:03:32.999805 master-0 kubenswrapper[26053]: I0318 09:03:32.999737 26053 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" containerID="94a4ad92cd3b53ae4641e35e7fd4ec8fccd8630c21c0fc3c12a574e02645e3da" exitCode=0 Mar 18 09:03:32.999805 master-0 kubenswrapper[26053]: I0318 09:03:32.999752 26053 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" containerID="a380f5739da0f5e27b1b8f3bd34b12b88446dd93b791869bfaf36182d6421c5b" exitCode=0 Mar 18 09:03:32.999805 master-0 kubenswrapper[26053]: I0318 09:03:32.999759 26053 generic.go:334] "Generic (PLEG): container finished" podID="5f827195-f68d-4bd2-865b-a1f041a5c73e" containerID="4dce06688697b9f6ea7f2ce75cbd0f8dd6b27c169d4036f6f09223ce6b7ed156" 
exitCode=0 Mar 18 09:03:33.001586 master-0 kubenswrapper[26053]: I0318 09:03:33.001191 26053 generic.go:334] "Generic (PLEG): container finished" podID="1c322813-b574-4b46-b760-208ccecd01a5" containerID="dae73ee3ae724b2c21523292592ef38e39e0a433287c5f3b59839f74c5990e24" exitCode=0 Mar 18 09:03:33.001586 master-0 kubenswrapper[26053]: I0318 09:03:33.001211 26053 generic.go:334] "Generic (PLEG): container finished" podID="1c322813-b574-4b46-b760-208ccecd01a5" containerID="7403c7f38da67ef9a4e6e3661a1a27ddfd26ac674591d0d6ae38450cf6903ac0" exitCode=0 Mar 18 09:03:33.014212 master-0 kubenswrapper[26053]: I0318 09:03:33.012741 26053 generic.go:334] "Generic (PLEG): container finished" podID="995ec82c-b593-416a-9287-6020a484855c" containerID="6158208c344c114482182b4073df205ae1396e550c8ee72baa6c0932a13e4a44" exitCode=0 Mar 18 09:03:33.014212 master-0 kubenswrapper[26053]: I0318 09:03:33.012781 26053 generic.go:334] "Generic (PLEG): container finished" podID="995ec82c-b593-416a-9287-6020a484855c" containerID="ee6511dee404aac71ff58b974ad8491dbd1ce0b8a6ad263b0d8e251dc9d1b943" exitCode=0 Mar 18 09:03:33.036584 master-0 kubenswrapper[26053]: E0318 09:03:33.034921 26053 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:03:33.054300 master-0 kubenswrapper[26053]: I0318 09:03:33.054265 26053 generic.go:334] "Generic (PLEG): container finished" podID="d7205eeb-912b-4c31-b08f-ed0b2a1319aa" containerID="50fd77676f2fb32890abad0222ed7ebdb08546cdf39f1ddb90ccc00d539b7f06" exitCode=0 Mar 18 09:03:33.103084 master-0 kubenswrapper[26053]: E0318 09:03:33.102315 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:33.116585 master-0 kubenswrapper[26053]: I0318 09:03:33.108969 26053 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95" 
exitCode=0 Mar 18 09:03:33.116585 master-0 kubenswrapper[26053]: I0318 09:03:33.109004 26053 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a" exitCode=0 Mar 18 09:03:33.116585 master-0 kubenswrapper[26053]: I0318 09:03:33.109012 26053 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d" exitCode=0 Mar 18 09:03:33.162859 master-0 kubenswrapper[26053]: I0318 09:03:33.161686 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-xfqsm_800297fe-77fd-4f58-ade2-32a147cd7d5c/manager/1.log" Mar 18 09:03:33.162859 master-0 kubenswrapper[26053]: I0318 09:03:33.162240 26053 generic.go:334] "Generic (PLEG): container finished" podID="800297fe-77fd-4f58-ade2-32a147cd7d5c" containerID="9fa57acf7d89fed72b41cf833947aeeae5bc2aa09219f68d237536250d7030f8" exitCode=1 Mar 18 09:03:33.199006 master-0 kubenswrapper[26053]: I0318 09:03:33.198969 26053 generic.go:334] "Generic (PLEG): container finished" podID="7cac1300-44c1-4a7d-8d14-efa9702ad9df" containerID="9e7634be3a4cb755dbc0dd2889d5ffa704ff67f015983aeee93833b324c107db" exitCode=0 Mar 18 09:03:33.206264 master-0 kubenswrapper[26053]: E0318 09:03:33.205622 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:33.222238 master-0 kubenswrapper[26053]: I0318 09:03:33.219738 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_93298cb2-d669-49ea-92be-8891f07ab1c5/installer/0.log" Mar 18 09:03:33.222238 master-0 kubenswrapper[26053]: I0318 09:03:33.219779 26053 generic.go:334] "Generic (PLEG): container finished" podID="93298cb2-d669-49ea-92be-8891f07ab1c5" 
containerID="c0f26fec4f81ffb39062787c37d928b9983f9d92c91a3bd728d23e41e8ceecc3" exitCode=1 Mar 18 09:03:33.237441 master-0 kubenswrapper[26053]: I0318 09:03:33.237374 26053 generic.go:334] "Generic (PLEG): container finished" podID="93cb5ef1-e8f1-4d11-8c93-1abf24626176" containerID="dbc1cb6940e9efff07d651c65a18c59c674dd8bccc10c54e3755e80079c9084e" exitCode=0 Mar 18 09:03:33.251938 master-0 kubenswrapper[26053]: I0318 09:03:33.251895 26053 generic.go:334] "Generic (PLEG): container finished" podID="8dacdedc-c6ad-40d4-afdc-59a31be417fe" containerID="ef703157d612ad5a33aedc987f4c2c3909390ffd8d83083c1d4a577646a22e59" exitCode=0 Mar 18 09:03:33.257689 master-0 kubenswrapper[26053]: I0318 09:03:33.257660 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666" exitCode=0 Mar 18 09:03:33.260173 master-0 kubenswrapper[26053]: I0318 09:03:33.260141 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_5ca7b84e-0aff-4526-948a-03492712ff8f/installer/0.log" Mar 18 09:03:33.261142 master-0 kubenswrapper[26053]: I0318 09:03:33.261100 26053 generic.go:334] "Generic (PLEG): container finished" podID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerID="20d4a123aac7008bd6bae1aff8407f2615166875d8bf7999da7a207bfc33acbf" exitCode=1 Mar 18 09:03:33.264482 master-0 kubenswrapper[26053]: I0318 09:03:33.264447 26053 generic.go:334] "Generic (PLEG): container finished" podID="599418d3-6afa-46ab-9afa-659134f7ac94" containerID="be9197abb6a4f7b0149993aa1f56516c44e239640ef2e0e8bd7924f48826c43c" exitCode=0 Mar 18 09:03:33.266079 master-0 kubenswrapper[26053]: I0318 09:03:33.266050 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log" Mar 18 09:03:33.266149 
master-0 kubenswrapper[26053]: I0318 09:03:33.266111 26053 generic.go:334] "Generic (PLEG): container finished" podID="25781967-12ce-490e-94aa-9b9722f495da" containerID="49a79a26d80521d4a77ceb38753751818ca40b01df46c62b4c6e6cd03feb2aa4" exitCode=1 Mar 18 09:03:33.272641 master-0 kubenswrapper[26053]: I0318 09:03:33.272119 26053 generic.go:334] "Generic (PLEG): container finished" podID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" containerID="ee10cfeeb8c93ff8e40f81f0386b22a513e8b6ef1f61583ef7f0a572ddbf099a" exitCode=0 Mar 18 09:03:33.275048 master-0 kubenswrapper[26053]: I0318 09:03:33.275023 26053 generic.go:334] "Generic (PLEG): container finished" podID="bb6ef4c4-bff3-4559-8e42-582bbd668b7c" containerID="94a0ef05ccdfbfbab75ff3d50bbf9ce2c5410905e297dadef1700e3589016d40" exitCode=0 Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.277458 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-config-operator/2.log" Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.277729 26053 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="e8cd059870c802ff3fdfa21cb82c57c0674dfa32ec84f5d0c29f5b8b3041ec4d" exitCode=255 Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.277743 26053 generic.go:334] "Generic (PLEG): container finished" podID="95143c61-6f91-4cd4-9411-31c2fb75d4d0" containerID="cca28a804f84553b8b1a53af19f79b42304859cf6bff54e57401c4419c4a7e40" exitCode=0 Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.278852 26053 generic.go:334] "Generic (PLEG): container finished" podID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerID="19dda705eb005970ec7faa939c9f315d05d7277d2869c2b15c7b89d228425457" exitCode=0 Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.279837 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log" Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.279854 26053 generic.go:334] "Generic (PLEG): container finished" podID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerID="f4700f538c7d454f7c9d134fd47d7a5c2ce673d0b9bd02c96a2dfc730672550e" exitCode=1 Mar 18 09:03:33.282626 master-0 kubenswrapper[26053]: I0318 09:03:33.281054 26053 generic.go:334] "Generic (PLEG): container finished" podID="b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd" containerID="eca3cc2c6f8e3aeae9e8d1a0e8694ecad0c3c1ccd8351a14dff6726fb181ef90" exitCode=0 Mar 18 09:03:33.285916 master-0 kubenswrapper[26053]: I0318 09:03:33.283612 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log" Mar 18 09:03:33.285916 master-0 kubenswrapper[26053]: I0318 09:03:33.283934 26053 generic.go:334] "Generic (PLEG): container finished" podID="fdb52116-9c55-4464-99c8-fc2e4559996b" containerID="bdeb3e204eeda9a4ca5f0b606295f7a8a8b0db7e2e36aab9adc87281923f44e9" exitCode=255 Mar 18 09:03:33.285916 master-0 kubenswrapper[26053]: I0318 09:03:33.285417 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/0.log" Mar 18 09:03:33.285916 master-0 kubenswrapper[26053]: I0318 09:03:33.285470 26053 generic.go:334] "Generic (PLEG): container finished" podID="1deb139f-1903-417e-835c-28abdd156cdb" containerID="32b058c6d1ee238c753a849a50cae740263263767c61bf2151475052399455e0" exitCode=1 Mar 18 09:03:33.300758 master-0 kubenswrapper[26053]: I0318 09:03:33.300728 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log" Mar 18 09:03:33.300915 master-0 kubenswrapper[26053]: I0318 09:03:33.300769 26053 generic.go:334] "Generic (PLEG): container finished" podID="1df9560e-21f0-44fe-bb51-4bc0fde4a3ac" containerID="33e0c0fa477ce3a082850936be336ae3c69e7dc9385f227bc893cfb947394012" exitCode=255 Mar 18 09:03:33.303121 master-0 kubenswrapper[26053]: I0318 09:03:33.302064 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_b75d3625-4131-465d-a8e2-4c42588c7630/installer/0.log" Mar 18 09:03:33.303121 master-0 kubenswrapper[26053]: I0318 09:03:33.302085 26053 generic.go:334] "Generic (PLEG): container finished" podID="b75d3625-4131-465d-a8e2-4c42588c7630" containerID="f10ab16270a7803054be2d271744f71e45d5e3fab77e472706ee3fb055b353ea" exitCode=1 Mar 18 09:03:33.305501 master-0 kubenswrapper[26053]: I0318 09:03:33.305483 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log" Mar 18 09:03:33.305941 master-0 kubenswrapper[26053]: I0318 09:03:33.305814 26053 generic.go:334] "Generic (PLEG): container finished" podID="411d544f-e105-44f0-927a-f61406b3f070" containerID="c7cfa4dec96dbca2fe125b83f44d5acd8c41f552ae5f721e4aca31bd53b0ff70" exitCode=1 Mar 18 09:03:33.305941 master-0 kubenswrapper[26053]: E0318 09:03:33.305875 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:33.309909 master-0 kubenswrapper[26053]: I0318 09:03:33.309884 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-6rtpx_8b779ce3-07c4-45ca-b1ca-750c95ed3d0b/network-operator/1.log" Mar 18 09:03:33.310004 master-0 
kubenswrapper[26053]: I0318 09:03:33.309915 26053 generic.go:334] "Generic (PLEG): container finished" podID="8b779ce3-07c4-45ca-b1ca-750c95ed3d0b" containerID="88991e3930254d3b149944c85afc57bb3f7cc44aa37269c1606831ad4c12dd71" exitCode=255 Mar 18 09:03:33.311295 master-0 kubenswrapper[26053]: I0318 09:03:33.311273 26053 generic.go:334] "Generic (PLEG): container finished" podID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerID="4fac56b4f00969e62c3497577a0e34f987859f3caade7772d5b6be1eaf234a7d" exitCode=0 Mar 18 09:03:33.320864 master-0 kubenswrapper[26053]: I0318 09:03:33.320824 26053 generic.go:334] "Generic (PLEG): container finished" podID="a0cd1cf7-be6f-4baf-8761-69c693476de9" containerID="99ea637f908899f3c91ea05ee2b0d7e3ac50162756d8cfe11cb446dfbb2129bd" exitCode=0 Mar 18 09:03:33.406073 master-0 kubenswrapper[26053]: E0318 09:03:33.405949 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:33.435019 master-0 kubenswrapper[26053]: E0318 09:03:33.434989 26053 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 18 09:03:33.467395 master-0 kubenswrapper[26053]: I0318 09:03:33.467356 26053 manager.go:324] Recovery completed Mar 18 09:03:33.506140 master-0 kubenswrapper[26053]: E0318 09:03:33.506086 26053 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 18 09:03:33.560604 master-0 kubenswrapper[26053]: I0318 09:03:33.557016 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 18 09:03:33.566174 master-0 kubenswrapper[26053]: I0318 09:03:33.563180 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 18 09:03:33.566174 master-0 kubenswrapper[26053]: I0318 09:03:33.563233 26053 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:33.566174 master-0 kubenswrapper[26053]: I0318 09:03:33.563247 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:33.570384 master-0 kubenswrapper[26053]: I0318 09:03:33.570346 26053 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 18 09:03:33.570384 master-0 kubenswrapper[26053]: I0318 09:03:33.570372 26053 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 18 09:03:33.570488 master-0 kubenswrapper[26053]: I0318 09:03:33.570412 26053 state_mem.go:36] "Initialized new in-memory state store"
Mar 18 09:03:33.571223 master-0 kubenswrapper[26053]: I0318 09:03:33.570661 26053 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 18 09:03:33.571223 master-0 kubenswrapper[26053]: I0318 09:03:33.570683 26053 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 18 09:03:33.571223 master-0 kubenswrapper[26053]: I0318 09:03:33.570707 26053 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 18 09:03:33.571223 master-0 kubenswrapper[26053]: I0318 09:03:33.570716 26053 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 18 09:03:33.571223 master-0 kubenswrapper[26053]: I0318 09:03:33.570725 26053 policy_none.go:49] "None policy: Start"
Mar 18 09:03:33.574438 master-0 kubenswrapper[26053]: I0318 09:03:33.574397 26053 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 18 09:03:33.574438 master-0 kubenswrapper[26053]: I0318 09:03:33.574440 26053 state_mem.go:35] "Initializing new in-memory state store"
Mar 18 09:03:33.574739 master-0 kubenswrapper[26053]: I0318 09:03:33.574692 26053 state_mem.go:75] "Updated machine memory state"
Mar 18 09:03:33.574739 master-0 kubenswrapper[26053]: I0318 09:03:33.574708 26053 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 18 09:03:33.587165 master-0 kubenswrapper[26053]: I0318 09:03:33.587123 26053 manager.go:334] "Starting Device Plugin manager"
Mar 18 09:03:33.587286 master-0 kubenswrapper[26053]: I0318 09:03:33.587169 26053 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 18 09:03:33.587286 master-0 kubenswrapper[26053]: I0318 09:03:33.587186 26053 server.go:79] "Starting device plugin registration server"
Mar 18 09:03:33.587765 master-0 kubenswrapper[26053]: I0318 09:03:33.587724 26053 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 18 09:03:33.587841 master-0 kubenswrapper[26053]: I0318 09:03:33.587751 26053 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 18 09:03:33.588350 master-0 kubenswrapper[26053]: I0318 09:03:33.588311 26053 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 18 09:03:33.588436 master-0 kubenswrapper[26053]: I0318 09:03:33.588402 26053 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 18 09:03:33.588436 master-0 kubenswrapper[26053]: I0318 09:03:33.588412 26053 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 18 09:03:33.597358 master-0 kubenswrapper[26053]: E0318 09:03:33.597296 26053 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 18 09:03:33.697984 master-0 kubenswrapper[26053]: I0318 09:03:33.697639 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:33.701079 master-0 kubenswrapper[26053]: I0318 09:03:33.701041 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:33.701079 master-0 kubenswrapper[26053]: I0318 09:03:33.701078 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:33.701196 master-0 kubenswrapper[26053]: I0318 09:03:33.701086 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:33.701196 master-0 kubenswrapper[26053]: I0318 09:03:33.701107 26053 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 18 09:03:34.235581 master-0 kubenswrapper[26053]: I0318 09:03:34.235478 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:03:34.235775 master-0 kubenswrapper[26053]: I0318 09:03:34.235614 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.237914 master-0 kubenswrapper[26053]: I0318 09:03:34.237878 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.237914 master-0 kubenswrapper[26053]: I0318 09:03:34.237911 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.238071 master-0 kubenswrapper[26053]: I0318 09:03:34.237919 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.238071 master-0 kubenswrapper[26053]: I0318 09:03:34.237990 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.238219 master-0 kubenswrapper[26053]: I0318 09:03:34.238173 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.240332 master-0 kubenswrapper[26053]: I0318 09:03:34.240306 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.240332 master-0 kubenswrapper[26053]: I0318 09:03:34.240328 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.240417 master-0 kubenswrapper[26053]: I0318 09:03:34.240336 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.240417 master-0 kubenswrapper[26053]: I0318 09:03:34.240385 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.240556 master-0 kubenswrapper[26053]: I0318 09:03:34.240525 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.241307 master-0 kubenswrapper[26053]: I0318 09:03:34.241269 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.241307 master-0 kubenswrapper[26053]: I0318 09:03:34.241309 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.241399 master-0 kubenswrapper[26053]: I0318 09:03:34.241317 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.245003 master-0 kubenswrapper[26053]: I0318 09:03:34.244946 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.245003 master-0 kubenswrapper[26053]: I0318 09:03:34.245004 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.245183 master-0 kubenswrapper[26053]: I0318 09:03:34.245016 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.245183 master-0 kubenswrapper[26053]: I0318 09:03:34.245100 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.245183 master-0 kubenswrapper[26053]: I0318 09:03:34.245135 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.245183 master-0 kubenswrapper[26053]: I0318 09:03:34.245148 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.245426 master-0 kubenswrapper[26053]: I0318 09:03:34.245397 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.245495 master-0 kubenswrapper[26053]: I0318 09:03:34.245470 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.248780 master-0 kubenswrapper[26053]: I0318 09:03:34.248725 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.248881 master-0 kubenswrapper[26053]: I0318 09:03:34.248784 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.248881 master-0 kubenswrapper[26053]: I0318 09:03:34.248823 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.249013 master-0 kubenswrapper[26053]: I0318 09:03:34.248993 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.249518 master-0 kubenswrapper[26053]: I0318 09:03:34.249490 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.249654 master-0 kubenswrapper[26053]: I0318 09:03:34.249633 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.249761 master-0 kubenswrapper[26053]: I0318 09:03:34.249688 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.249827 master-0 kubenswrapper[26053]: I0318 09:03:34.249817 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.253099 master-0 kubenswrapper[26053]: I0318 09:03:34.253055 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.253174 master-0 kubenswrapper[26053]: I0318 09:03:34.253104 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.253174 master-0 kubenswrapper[26053]: I0318 09:03:34.253122 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.253300 master-0 kubenswrapper[26053]: I0318 09:03:34.253271 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.253300 master-0 kubenswrapper[26053]: I0318 09:03:34.253295 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.253375 master-0 kubenswrapper[26053]: I0318 09:03:34.253304 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.253375 master-0 kubenswrapper[26053]: I0318 09:03:34.253315 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.253602 master-0 kubenswrapper[26053]: I0318 09:03:34.253530 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.257903 master-0 kubenswrapper[26053]: I0318 09:03:34.257874 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.257903 master-0 kubenswrapper[26053]: I0318 09:03:34.257894 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.257903 master-0 kubenswrapper[26053]: I0318 09:03:34.257901 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.258058 master-0 kubenswrapper[26053]: I0318 09:03:34.258000 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.258058 master-0 kubenswrapper[26053]: I0318 09:03:34.258030 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.258058 master-0 kubenswrapper[26053]: I0318 09:03:34.258042 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:34.258058 master-0 kubenswrapper[26053]: I0318 09:03:34.258045 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eeb3f8508d8d3c4f3d88616faaf160c40c1688d847f4d4385e29255722ded89"
Mar 18 09:03:34.258248 master-0 kubenswrapper[26053]: I0318 09:03:34.258220 26053 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 18 09:03:34.258248 master-0 kubenswrapper[26053]: I0318 09:03:34.258209 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3ca8adab6e36fc6073de1c6ddada1eb6d6c8531a7b3f49bf5696edf52269053b"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258253 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"508ba28f4996f4846c09ffaac0d5fd73f81397921594eed543f49f2663c92153"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258263 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"78adc9fceec0398f87741046798ef37a06ff88e851d3911c97f4d19ca0250270"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258272 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"1f7b3a7ed16a4b262bbae39dc4d7a6a48993213e9a09aa0191819566831513ec"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258279 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"568a67ef6669824859d31edfa49f03a313b1376806d5623e2b85e3955cdc8a8c"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258287 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"9cb189c47185ee7666cdc7e6aa936134fd95f8598c903e678c39284b0494bcba"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258297 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"c26eb3bf03b5fe4ebeece6b8722b565a3875e9cd3bc4e444bee1b43372467a32"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258305 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"e9c6441b6451eb8d4f18b81edc159711a0094c083c79128b3e30069808890f14"}
Mar 18 09:03:34.258333 master-0 kubenswrapper[26053]: I0318 09:03:34.258315 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"7f67914817f6c7225b0d1e0411ea65a3f10d85891389f6bc9dcac2e2540a11b6"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258340 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9c3d8b42af9b426126b726ec59a1846a0620aa47da4e39676529cdfdcfe989"
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258381 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"7c2aae6fa53257e6d8c7e1c783c29a93037db597eccbd9c6d53d330e1c671296"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258389 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258397 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258422 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"e0b37287226cec590faa4200c15d2fef886c4879e12913c9f633d02f362fc880"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258430 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"99b2a12b7f88eda209977be842c5d486304d7932fa91b2448c0ff3f2bd17f526"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258438 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"d5fdea15855020c7a6ace295d323d168cc8f0fab3f1b0678b2b4dd54d4267ce4"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258445 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"b949a573f922a32d775357bfcb732ec6a7748990727195e8e189e898f3802768"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258453 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95378a840215d5780aa88df876aac909","Type":"ContainerStarted","Data":"c361cbba945001e9baf7ce5c31f92c9a1b2e62ac88d976a094c24336f0593c2e"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258461 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"95378a840215d5780aa88df876aac909","Type":"ContainerStarted","Data":"bfd8810c464d77aec01f793c5157b8c1f1263372b617d164c28d17b0fec09dfe"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258473 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"4fc555cd68d5d190723bdb906f024eca28a915e20d6010038a593dff24a564cd"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258481 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cd8f1b2378c428693218d79b09a56c9b55b51bb98be0e6bcf8f6074d75fc8fec"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258489 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258498 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"}
Mar 18 09:03:34.258535 master-0 kubenswrapper[26053]: I0318 09:03:34.258506 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258553 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9783b2b210b8a83d070b181fdbb2da8ba234da764756739e3354c4aa2f2e32b"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258589 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbf3348e82bffe8480be217acc63e599c4842d6df59ff32a187560845a00e908"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258614 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"b98c563bab7682462c40e7da7e26ff18216a7a69aec7a61033377ca04547a6d0"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258623 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258631 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258640 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258669 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258677 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerDied","Data":"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258686 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"13a068e44f036eb5ea2827a8a27172c655290a87fa0428a7b71b67b8505f2fbb"}
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258697 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b023e92f57d6773ebf2508c0ed8826a189d16751fde08444987f384bb9579093"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258721 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181944668b8a2ce83ab0c8df1ad74ddf1e053adffb02e319eb1d45759d68acf0"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258744 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c615e349deeb331df9e16cf8bf4c467f24a3403e12d87a9f1138c30e06a4c9d2"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258753 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6669c488a020cf374cca62487f896819e27005e13ddd29853b483ea8a721d767"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258778 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3d7e4fd3a2cab558b1ebece0211a1e0de8af572fefd420da566dc2b08839acd"
Mar 18 09:03:34.258928 master-0 kubenswrapper[26053]: I0318 09:03:34.258799 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2bd81df931b251c8d36514f9c347cc536878690477cb5bf137fec13c0335990"
Mar 18 09:03:34.261277 master-0 kubenswrapper[26053]: I0318 09:03:34.261237 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 18 09:03:34.261347 master-0 kubenswrapper[26053]: I0318 09:03:34.261290 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 18 09:03:34.261347 master-0 kubenswrapper[26053]: I0318 09:03:34.261309 26053 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 18 09:03:38.167150 master-0 kubenswrapper[26053]: I0318 09:03:38.167078 26053 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 18 09:03:38.167740 master-0 kubenswrapper[26053]: I0318 09:03:38.167313 26053 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 09:03:38.167740 master-0 kubenswrapper[26053]: I0318 09:03:38.167513 26053 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 09:03:38.175214 master-0 kubenswrapper[26053]: I0318 09:03:38.175119 26053 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 18 09:03:38.175914 master-0 kubenswrapper[26053]: I0318 09:03:38.175882 26053 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 09:03:38.275488 master-0 kubenswrapper[26053]: I0318 09:03:38.275423 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.275716 master-0 kubenswrapper[26053]: I0318 09:03:38.275497 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.275716 master-0 kubenswrapper[26053]: I0318 09:03:38.275582 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.275716 master-0 kubenswrapper[26053]: I0318 09:03:38.275633 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.275815 master-0 kubenswrapper[26053]: I0318 09:03:38.275773 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.275848 master-0 kubenswrapper[26053]: I0318 09:03:38.275826 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.275885 master-0 kubenswrapper[26053]: I0318 09:03:38.275861 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.275915 master-0 kubenswrapper[26053]: I0318 09:03:38.275895 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.275948 master-0 kubenswrapper[26053]: I0318 09:03:38.275936 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:03:38.275981 master-0 kubenswrapper[26053]: I0318 09:03:38.275963 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:03:38.276022 master-0 kubenswrapper[26053]: I0318 09:03:38.275996 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:03:38.276064 master-0 kubenswrapper[26053]: I0318 09:03:38.276025 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.276064 master-0 kubenswrapper[26053]: I0318 09:03:38.276047 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.276127 master-0 kubenswrapper[26053]: I0318 09:03:38.276067 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.276127 master-0 kubenswrapper[26053]: I0318 09:03:38.276088 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.276127 master-0 kubenswrapper[26053]: I0318 09:03:38.276111 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276132 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276157 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276176 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276214 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276247 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.276296 master-0 kubenswrapper[26053]: I0318 09:03:38.276288 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:03:38.276500 master-0 kubenswrapper[26053]: I0318 09:03:38.276362 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.376832 master-0 kubenswrapper[26053]: I0318 09:03:38.376771 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:03:38.376929 master-0 kubenswrapper[26053]: I0318 09:03:38.376878 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 18 09:03:38.377018 master-0 kubenswrapper[26053]: I0318 09:03:38.376963 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.377018 master-0 kubenswrapper[26053]: I0318 09:03:38.377008 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.377080 master-0 kubenswrapper[26053]: I0318 09:03:38.377008 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.377080 master-0 kubenswrapper[26053]: I0318 09:03:38.377049 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 18 09:03:38.377080 master-0 kubenswrapper[26053]: I0318 09:03:38.377058 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377084 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377087 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377108 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377131 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377143 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:03:38.377173 master-0 kubenswrapper[26053]: I0318 09:03:38.377157 26053
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377178 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377197 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377213 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377215 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377238 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377245 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377253 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377294 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377323 master-0 kubenswrapper[26053]: I0318 09:03:38.377315 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377323 
master-0 kubenswrapper[26053]: I0318 09:03:38.377330 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377354 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377365 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377389 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377396 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377622 master-0 
kubenswrapper[26053]: I0318 09:03:38.377474 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377524 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377547 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377550 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377557 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 
09:03:38.377604 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377622 master-0 kubenswrapper[26053]: I0318 09:03:38.377625 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377648 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377681 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377689 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377716 26053 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377731 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377754 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377770 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377789 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377824 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"95378a840215d5780aa88df876aac909\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377829 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377841 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:38.377947 master-0 kubenswrapper[26053]: I0318 09:03:38.377851 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.459256 master-0 kubenswrapper[26053]: I0318 09:03:38.459185 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.462681 master-0 kubenswrapper[26053]: I0318 09:03:38.462639 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.462847 master-0 kubenswrapper[26053]: I0318 09:03:38.462808 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:38.474123 master-0 kubenswrapper[26053]: I0318 
09:03:38.474070 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:38.675432 master-0 kubenswrapper[26053]: I0318 09:03:38.675372 26053 apiserver.go:52] "Watching apiserver" Mar 18 09:03:38.692552 master-0 kubenswrapper[26053]: I0318 09:03:38.692487 26053 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 09:03:38.700724 master-0 kubenswrapper[26053]: I0318 09:03:38.700652 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc","openshift-kube-scheduler/installer-4-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl","openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc","openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6","openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh","openshift-marketplace/redhat-marketplace-2gpbt","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-scheduler/installer-4-retry-1-master-0","openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n","openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx","kube-system/bootstrap-kube-scheduler-master-0","openshift-cluster-version/cluster-version-operator-7d58488df-q58jp","openshift-controller-manager/controller-manager-7d954fcfb-gpddv","openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h","openshift-multus/multus-h7vq8","openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p","openshift-etcd/
installer-2-master-0","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf","openshift-monitoring/node-exporter-kp8pg","openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9","openshift-ingress-canary/ingress-canary-226gc","openshift-insights/insights-operator-68bf6ff9d6-89rtc","openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw","openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4","openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr","openshift-ovn-kubernetes/ovnkube-node-6ff5l","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l","openshift-dns/node-resolver-thqlt","openshift-multus/network-metrics-daemon-2xs9n","openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl","openshift-kube-scheduler/installer-3-master-0","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd","openshift-marketplace/marketplace-operator-89ccd998f-m862c","openshift-monitoring/metrics-server-7875f64c8-kmr8t","openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9","assisted-installer/assisted-installer-controller-tjfg6","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-node-tuning-operator/tuned-84qxz","openshift-service-ca/service-ca-79bc6b8d76-fhj95","openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k","openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r","openshift-etcd/etcd-master-0","openshift-network-operator/iptables-alerter-vr4gq","openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r","opens
hift-kube-apiserver/installer-1-retry-2-master-0","openshift-marketplace/redhat-operators-4r6jd","openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5","openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv","openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d","openshift-marketplace/community-operators-nfdcz","openshift-multus/multus-additional-cni-plugins-68tmr","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr","openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp","openshift-network-diagnostics/network-check-target-7r2q2","openshift-apiserver/apiserver-77f845f574-2wpgz","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw","openshift-ingress/router-default-7dcf5569b5-sgsmn","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp","openshift-dns/dns-default-pj485","openshift-etcd/installer-1-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg","openshift-machine-config-operator/machine-config-daemon-rhm2f","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62","openshift-machine-config-operator/machine-config-server-rw7hw","openshift-network-node-identity/network-node-identity-lf7kq","openshift-network-operator/network-operator-7bd846bfc4-6rtpx","openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr","openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg","openshift-dns-operator/dns-
operator-9c5679d8f-2649q","openshift-kube-apiserver/installer-1-master-0","openshift-marketplace/certified-operators-5x8lj"] Mar 18 09:03:38.710649 master-0 kubenswrapper[26053]: I0318 09:03:38.705881 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-tjfg6" Mar 18 09:03:38.735045 master-0 kubenswrapper[26053]: I0318 09:03:38.734966 26053 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="c47ac101-a848-4f5e-a03d-3382567e0d85" Mar 18 09:03:38.736771 master-0 kubenswrapper[26053]: I0318 09:03:38.736699 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.736993 master-0 kubenswrapper[26053]: I0318 09:03:38.736969 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.737297 master-0 kubenswrapper[26053]: I0318 09:03:38.737273 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.737414 master-0 kubenswrapper[26053]: I0318 09:03:38.737385 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 18 09:03:38.737851 master-0 kubenswrapper[26053]: I0318 09:03:38.737828 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 09:03:38.737927 master-0 kubenswrapper[26053]: I0318 09:03:38.737908 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 09:03:38.737971 master-0 kubenswrapper[26053]: I0318 09:03:38.737949 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 09:03:38.738000 master-0 kubenswrapper[26053]: I0318 09:03:38.737984 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 09:03:38.738028 master-0 kubenswrapper[26053]: I0318 09:03:38.737389 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 18 09:03:38.738056 master-0 kubenswrapper[26053]: I0318 09:03:38.738047 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 09:03:38.738141 master-0 kubenswrapper[26053]: I0318 09:03:38.738120 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 18 09:03:38.738335 master-0 kubenswrapper[26053]: I0318 09:03:38.738312 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:03:38.738374 master-0 kubenswrapper[26053]: I0318 09:03:38.737835 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.738438 master-0 kubenswrapper[26053]: I0318 09:03:38.738421 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 18 09:03:38.738468 master-0 kubenswrapper[26053]: I0318 09:03:38.738458 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.738543 master-0 kubenswrapper[26053]: I0318 09:03:38.738527 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 18 
09:03:38.738601 master-0 kubenswrapper[26053]: I0318 09:03:38.738586 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:03:38.741334 master-0 kubenswrapper[26053]: I0318 09:03:38.738670 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:03:38.749237 master-0 kubenswrapper[26053]: I0318 09:03:38.739852 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 18 09:03:38.749237 master-0 kubenswrapper[26053]: I0318 09:03:38.739861 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 18 09:03:38.749237 master-0 kubenswrapper[26053]: I0318 09:03:38.740036 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 18 09:03:38.749997 master-0 kubenswrapper[26053]: I0318 09:03:38.738785 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 09:03:38.750135 master-0 kubenswrapper[26053]: I0318 09:03:38.738830 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 09:03:38.750203 master-0 kubenswrapper[26053]: I0318 09:03:38.738864 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 09:03:38.750244 master-0 kubenswrapper[26053]: I0318 09:03:38.738873 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.750310 master-0 kubenswrapper[26053]: I0318 09:03:38.738924 26053 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 09:03:38.750413 master-0 kubenswrapper[26053]: I0318 09:03:38.738955 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 18 09:03:38.750450 master-0 kubenswrapper[26053]: I0318 09:03:38.741283 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.750450 master-0 kubenswrapper[26053]: I0318 09:03:38.743083 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 09:03:38.750502 master-0 kubenswrapper[26053]: I0318 09:03:38.745288 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 09:03:38.750644 master-0 kubenswrapper[26053]: I0318 09:03:38.748452 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:03:38.756524 master-0 kubenswrapper[26053]: I0318 09:03:38.756427 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 09:03:38.760503 master-0 kubenswrapper[26053]: I0318 09:03:38.760467 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 18 09:03:38.761724 master-0 kubenswrapper[26053]: I0318 09:03:38.761678 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.762306 master-0 kubenswrapper[26053]: I0318 09:03:38.762256 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 09:03:38.762418 master-0 kubenswrapper[26053]: I0318 09:03:38.762390 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:03:38.762502 master-0 kubenswrapper[26053]: I0318 09:03:38.762481 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 09:03:38.762687 master-0 kubenswrapper[26053]: I0318 09:03:38.762631 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.762750 master-0 kubenswrapper[26053]: I0318 09:03:38.762740 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 09:03:38.762783 master-0 kubenswrapper[26053]: I0318 09:03:38.762756 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 09:03:38.763161 master-0 kubenswrapper[26053]: I0318 09:03:38.762913 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 09:03:38.763161 master-0 kubenswrapper[26053]: I0318 09:03:38.763047 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:03:38.763255 master-0 kubenswrapper[26053]: I0318 09:03:38.763186 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 09:03:38.763284 master-0 kubenswrapper[26053]: I0318 09:03:38.763252 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 09:03:38.763284 master-0 kubenswrapper[26053]: I0318 09:03:38.763278 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 09:03:38.763476 
master-0 kubenswrapper[26053]: I0318 09:03:38.763440 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 09:03:38.763543 master-0 kubenswrapper[26053]: I0318 09:03:38.763534 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 09:03:38.763606 master-0 kubenswrapper[26053]: I0318 09:03:38.763576 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 09:03:38.763647 master-0 kubenswrapper[26053]: I0318 09:03:38.763617 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 18 09:03:38.763841 master-0 kubenswrapper[26053]: I0318 09:03:38.763774 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:03:38.764109 master-0 kubenswrapper[26053]: I0318 09:03:38.763862 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 09:03:38.764109 master-0 kubenswrapper[26053]: I0318 09:03:38.763954 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 18 09:03:38.764305 master-0 kubenswrapper[26053]: I0318 09:03:38.764262 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 09:03:38.764726 master-0 kubenswrapper[26053]: I0318 09:03:38.764704 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 09:03:38.764948 master-0 kubenswrapper[26053]: I0318 09:03:38.764896 26053 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:03:38.764991 master-0 kubenswrapper[26053]: I0318 09:03:38.762641 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 09:03:38.765187 master-0 kubenswrapper[26053]: I0318 09:03:38.765171 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:03:38.765850 master-0 kubenswrapper[26053]: I0318 09:03:38.765730 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 09:03:38.765962 master-0 kubenswrapper[26053]: I0318 09:03:38.765941 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 09:03:38.766103 master-0 kubenswrapper[26053]: I0318 09:03:38.765977 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 09:03:38.766352 master-0 kubenswrapper[26053]: I0318 09:03:38.766330 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 18 09:03:38.766600 master-0 kubenswrapper[26053]: I0318 09:03:38.766483 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 09:03:38.766691 master-0 kubenswrapper[26053]: I0318 09:03:38.766669 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 09:03:38.766796 master-0 kubenswrapper[26053]: I0318 09:03:38.766775 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:03:38.766889 master-0 kubenswrapper[26053]: I0318 09:03:38.766874 26053 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.767124 master-0 kubenswrapper[26053]: I0318 09:03:38.767107 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 18 09:03:38.767193 master-0 kubenswrapper[26053]: I0318 09:03:38.767157 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 18 09:03:38.767339 master-0 kubenswrapper[26053]: I0318 09:03:38.767287 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 09:03:38.767339 master-0 kubenswrapper[26053]: I0318 09:03:38.767330 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 09:03:38.767446 master-0 kubenswrapper[26053]: I0318 09:03:38.767366 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 09:03:38.767446 master-0 kubenswrapper[26053]: I0318 09:03:38.767392 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767618 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767674 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767721 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767748 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767783 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767865 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767897 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767922 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.767961 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.768004 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.768117 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.768152 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.768278 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769342 26053 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769522 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769654 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769700 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769817 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769838 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.769944 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.770057 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.770079 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.770125 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 18 09:03:38.771273 master-0 kubenswrapper[26053]: I0318 09:03:38.770145 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 18 09:03:38.772294 master-0 kubenswrapper[26053]: I0318 09:03:38.771967 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 18 09:03:38.772594 master-0 kubenswrapper[26053]: I0318 09:03:38.772496 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 18 09:03:38.773940 master-0 kubenswrapper[26053]: I0318 09:03:38.773577 26053 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 18 09:03:38.775229 master-0 kubenswrapper[26053]: I0318 09:03:38.775193 26053 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 18 09:03:38.781027 master-0 kubenswrapper[26053]: I0318 09:03:38.780742 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 09:03:38.781027 master-0 kubenswrapper[26053]: I0318 09:03:38.780783 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 18 09:03:38.781027 master-0 kubenswrapper[26053]: I0318 09:03:38.780919 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 18 09:03:38.781832 master-0 kubenswrapper[26053]: I0318 09:03:38.781799 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 18 09:03:38.782035 master-0 kubenswrapper[26053]: I0318 09:03:38.782009 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:38.783403 master-0 kubenswrapper[26053]: I0318 09:03:38.782969 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 09:03:38.784361 master-0 kubenswrapper[26053]: I0318 09:03:38.784329 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 09:03:38.789207 master-0 kubenswrapper[26053]: I0318 09:03:38.789145 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 09:03:38.789931 master-0 kubenswrapper[26053]: I0318 09:03:38.789868 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:03:38.790206 master-0 kubenswrapper[26053]: I0318 09:03:38.790166 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 18 09:03:38.790286 master-0 kubenswrapper[26053]: I0318 09:03:38.790252 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 18 09:03:38.791216 master-0 kubenswrapper[26053]: I0318 09:03:38.791184 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 18 09:03:38.791410 master-0 kubenswrapper[26053]: I0318 09:03:38.791373 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 18 09:03:38.791777 master-0 kubenswrapper[26053]: I0318 09:03:38.791748 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 09:03:38.796806 master-0 kubenswrapper[26053]: I0318 09:03:38.796766 26053 
desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 18 09:03:38.807775 master-0 kubenswrapper[26053]: I0318 09:03:38.807746 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 18 09:03:38.828056 master-0 kubenswrapper[26053]: I0318 09:03:38.827915 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 18 09:03:38.852644 master-0 kubenswrapper[26053]: I0318 09:03:38.851183 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 09:03:38.866587 master-0 kubenswrapper[26053]: I0318 09:03:38.866526 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882053 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882109 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882140 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882172 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882196 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882220 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882244 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " 
pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882267 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882290 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882312 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882344 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882368 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882388 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882419 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882441 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882465 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882493 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882517 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882549 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882589 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882611 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert\") pod 
\"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882632 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882652 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882672 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrxv\" (UniqueName: \"kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882697 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882722 26053 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882763 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882787 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882811 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882835 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhdc2\" (UniqueName: \"kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 
09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882858 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882880 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882902 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882926 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.882984 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod 
\"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883008 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883032 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883060 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883082 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883105 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: 
\"kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883129 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883158 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zx99\" (UniqueName: \"kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99\") pod \"migrator-8487694857-sbsqg\" (UID: \"c6176328-5931-405b-8519-8e4bc83bedfb\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883182 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883206 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " 
pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883230 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnn98\" (UniqueName: \"kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883253 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883278 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883300 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883331 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883352 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883376 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883398 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddsnb\" (UniqueName: \"kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883421 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883447 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883470 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883503 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883527 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883549 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j5nwv\" (UniqueName: \"kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883587 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883618 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883642 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883667 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883691 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:03:38.883630 master-0 kubenswrapper[26053]: I0318 09:03:38.883716 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883739 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883764 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883787 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883812 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883840 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csfl2\" (UniqueName: \"kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883868 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883891 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883911 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883934 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883958 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.883983 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884006 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod 
\"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884031 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884053 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884074 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884098 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884122 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884147 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884170 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884194 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884216 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 
09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884238 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884262 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884285 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884308 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884330 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884354 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884380 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884402 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884426 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884450 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z98qs\" (UniqueName: \"kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884472 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884494 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884517 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884540 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: 
\"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884579 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884613 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884635 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884662 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczj8\" (UniqueName: \"kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884686 
26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884712 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884738 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884763 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884792 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884811 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884836 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884860 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqjsq\" (UniqueName: \"kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884884 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: 
\"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884912 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgbr\" (UniqueName: \"kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884938 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884961 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.884983 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.885006 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.885024 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.885045 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.885407 master-0 kubenswrapper[26053]: I0318 09:03:38.885463 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c56e1ac-8752-4e46-8692-93716087f0e0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885532 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c00ee838-424f-482b-942f-08f0952a5ccd-srv-cert\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 
09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885553 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/800297fe-77fd-4f58-ade2-32a147cd7d5c-cache\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885665 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/95143c61-6f91-4cd4-9411-31c2fb75d4d0-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885724 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-tuned\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885799 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885890 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-env-overrides\") pod 
\"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.885926 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-binary-copy\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.886085 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-catalog-content\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.886426 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/5f827195-f68d-4bd2-865b-a1f041a5c73e-operand-assets\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.886665 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-config\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:38.886867 master-0 kubenswrapper[26053]: I0318 09:03:38.886751 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/09269324-c908-474d-818f-5cd49406f1e2-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.886938 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.886948 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.887014 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4192ea44-a38c-4b70-93c3-8070da2ffe2f-metrics-tls\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.887120 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-config\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.887129 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovnkube-script-lib\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.887176 master-0 kubenswrapper[26053]: I0318 09:03:38.887120 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-catalog-content\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:38.887326 master-0 kubenswrapper[26053]: I0318 09:03:38.887186 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2fcd92f-0a58-4c87-8213-715453486aca-utilities\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:38.887437 master-0 kubenswrapper[26053]: I0318 09:03:38.887406 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-config\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:38.887437 master-0 kubenswrapper[26053]: I0318 09:03:38.887421 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.887498 master-0 kubenswrapper[26053]: I0318 09:03:38.887431 26053 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f827195-f68d-4bd2-865b-a1f041a5c73e-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:38.887869 master-0 kubenswrapper[26053]: I0318 09:03:38.887832 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:38.887906 master-0 kubenswrapper[26053]: I0318 09:03:38.887889 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.887941 master-0 kubenswrapper[26053]: I0318 09:03:38.887911 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8dacdedc-c6ad-40d4-afdc-59a31be417fe-ovn-node-metrics-cert\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.888182 master-0 kubenswrapper[26053]: I0318 09:03:38.888143 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cff83a-8d8f-4e4f-96ef-99941c29ba53-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: 
\"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:38.888214 master-0 kubenswrapper[26053]: I0318 09:03:38.888115 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.888243 master-0 kubenswrapper[26053]: I0318 09:03:38.888209 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/411d544f-e105-44f0-927a-f61406b3f070-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.888243 master-0 kubenswrapper[26053]: I0318 09:03:38.888214 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-config\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:38.888293 master-0 kubenswrapper[26053]: I0318 09:03:38.888237 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.888329 master-0 kubenswrapper[26053]: I0318 09:03:38.888307 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-utilities\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:38.888329 master-0 kubenswrapper[26053]: I0318 09:03:38.888251 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:38.888379 master-0 kubenswrapper[26053]: I0318 09:03:38.888309 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:38.888411 master-0 kubenswrapper[26053]: I0318 09:03:38.888374 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.888411 master-0 kubenswrapper[26053]: I0318 09:03:38.888391 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7a3329-a04c-4b58-9364-b907c00cbe08-metrics-tls\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.888411 master-0 
kubenswrapper[26053]: I0318 09:03:38.888237 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/09269324-c908-474d-818f-5cd49406f1e2-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:38.888493 master-0 kubenswrapper[26053]: I0318 09:03:38.888477 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-client\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.888493 master-0 kubenswrapper[26053]: I0318 09:03:38.888537 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.888493 master-0 kubenswrapper[26053]: I0318 09:03:38.888588 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-774fx\" (UniqueName: \"kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888618 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod 
\"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888646 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888708 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888719 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-ovnkube-identity-cm\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888736 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888789 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwp9m\" (UniqueName: \"kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m\") pod \"csi-snapshot-controller-64854d9cff-qnc62\" (UID: \"4e919445-81d0-4663-8941-f596d8121305\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888791 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-srv-cert\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888810 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw5zj\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888819 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e86268c9-7a83-4ccb-979a-feff00cb4b3e-serving-cert\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888840 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8dacdedc-c6ad-40d4-afdc-59a31be417fe-env-overrides\") pod 
\"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888848 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7cac1300-44c1-4a7d-8d14-efa9702ad9df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888934 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-tmpfs\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888961 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888968 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.888998 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.889022 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7a3329-a04c-4b58-9364-b907c00cbe08-trusted-ca\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.889037 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.889066 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:38.889492 master-0 kubenswrapper[26053]: I0318 09:03:38.889067 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cff83a-8d8f-4e4f-96ef-99941c29ba53-config\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:38.889492 
master-0 kubenswrapper[26053]: I0318 09:03:38.889242 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95143c61-6f91-4cd4-9411-31c2fb75d4d0-serving-cert\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889190 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889659 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889686 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889707 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client\") pod 
\"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889732 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889767 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889793 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889820 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889844 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889869 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2mwd\" (UniqueName: \"kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889895 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889920 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4jq4\" (UniqueName: \"kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889945 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: 
\"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.889969 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.890026 master-0 kubenswrapper[26053]: I0318 09:03:38.890019 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:38.890399 master-0 kubenswrapper[26053]: I0318 09:03:38.890158 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.890399 master-0 kubenswrapper[26053]: I0318 09:03:38.890194 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj95l\" (UniqueName: \"kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:38.890399 master-0 kubenswrapper[26053]: I0318 09:03:38.890342 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/995ec82c-b593-416a-9287-6020a484855c-utilities\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:38.890399 master-0 kubenswrapper[26053]: I0318 09:03:38.890359 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/411d544f-e105-44f0-927a-f61406b3f070-cache\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.890399 master-0 kubenswrapper[26053]: I0318 09:03:38.890379 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:38.890528 master-0 kubenswrapper[26053]: I0318 09:03:38.890405 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.890556 master-0 kubenswrapper[26053]: I0318 09:03:38.890529 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 09:03:38.890650 master-0 
kubenswrapper[26053]: I0318 09:03:38.890617 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-config\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:38.890687 master-0 kubenswrapper[26053]: I0318 09:03:38.890665 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d0da6e3-3887-4361-8eae-e7447f9ff72c-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:38.890769 master-0 kubenswrapper[26053]: I0318 09:03:38.890739 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:38.890804 master-0 kubenswrapper[26053]: I0318 09:03:38.890787 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n76wp\" (UniqueName: \"kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw" Mar 18 09:03:38.890850 master-0 kubenswrapper[26053]: I0318 09:03:38.890828 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:38.890910 master-0 kubenswrapper[26053]: I0318 09:03:38.890887 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 09:03:38.890939 master-0 kubenswrapper[26053]: I0318 09:03:38.890927 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.891077 master-0 kubenswrapper[26053]: I0318 09:03:38.891044 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-utilities\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:38.891124 master-0 kubenswrapper[26053]: I0318 09:03:38.891103 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.891178 master-0 kubenswrapper[26053]: I0318 09:03:38.891160 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:38.891214 master-0 kubenswrapper[26053]: I0318 09:03:38.891202 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.891257 master-0 kubenswrapper[26053]: I0318 09:03:38.891240 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fbs4\" (UniqueName: \"kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:38.891257 master-0 kubenswrapper[26053]: I0318 09:03:38.891243 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-key\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 09:03:38.891310 master-0 kubenswrapper[26053]: I0318 09:03:38.891293 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxshz\" (UniqueName: \"kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " 
pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.891385 master-0 kubenswrapper[26053]: I0318 09:03:38.891361 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:38.891499 master-0 kubenswrapper[26053]: I0318 09:03:38.891476 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.891550 master-0 kubenswrapper[26053]: I0318 09:03:38.891530 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 09:03:38.891679 master-0 kubenswrapper[26053]: I0318 09:03:38.891621 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkcv\" (UniqueName: \"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:38.891679 master-0 kubenswrapper[26053]: I0318 09:03:38.891666 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.891781 master-0 kubenswrapper[26053]: I0318 09:03:38.891759 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:38.891812 master-0 kubenswrapper[26053]: I0318 09:03:38.891785 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-daemon-config\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.891839 master-0 kubenswrapper[26053]: I0318 09:03:38.891813 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc" Mar 18 09:03:38.891910 master-0 kubenswrapper[26053]: I0318 09:03:38.891886 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:38.891996 master-0 kubenswrapper[26053]: I0318 09:03:38.891972 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.892122 master-0 kubenswrapper[26053]: I0318 09:03:38.892056 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd8zs\" (UniqueName: \"kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs\") pod \"network-check-source-b4bf74f6-7zvkl\" (UID: \"17b1447b-1659-405b-81e0-21f0cf3e7a2c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl" Mar 18 09:03:38.892197 master-0 kubenswrapper[26053]: I0318 09:03:38.892112 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:38.892414 master-0 kubenswrapper[26053]: I0318 09:03:38.892222 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.892414 master-0 kubenswrapper[26053]: I0318 09:03:38.892235 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-serving-cert\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:38.892414 master-0 kubenswrapper[26053]: I0318 09:03:38.892297 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 09:03:38.892414 master-0 kubenswrapper[26053]: I0318 09:03:38.892328 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:38.892414 master-0 kubenswrapper[26053]: I0318 09:03:38.892356 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 09:03:38.892554 master-0 kubenswrapper[26053]: I0318 09:03:38.892431 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.892554 master-0 kubenswrapper[26053]: I0318 09:03:38.892489 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:38.892554 master-0 kubenswrapper[26053]: I0318 09:03:38.892532 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.892650 master-0 kubenswrapper[26053]: I0318 09:03:38.892593 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.892650 master-0 kubenswrapper[26053]: I0318 09:03:38.892623 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.892702 master-0 kubenswrapper[26053]: I0318 09:03:38.892673 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.892734 master-0 kubenswrapper[26053]: 
I0318 09:03:38.892704 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.892826 master-0 kubenswrapper[26053]: I0318 09:03:38.892730 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:38.892826 master-0 kubenswrapper[26053]: I0318 09:03:38.892732 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.892826 master-0 kubenswrapper[26053]: I0318 09:03:38.892739 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.892826 master-0 kubenswrapper[26053]: I0318 09:03:38.892673 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2682e4-cb63-4102-a83e-ef28023e273a-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: 
\"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 09:03:38.892826 master-0 kubenswrapper[26053]: I0318 09:03:38.892793 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:38.892957 master-0 kubenswrapper[26053]: I0318 09:03:38.892827 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cda44dd8-895a-4eab-bedc-83f38efa2482-tmp\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.892957 master-0 kubenswrapper[26053]: I0318 09:03:38.892870 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.892957 master-0 kubenswrapper[26053]: I0318 09:03:38.892886 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1deb139f-1903-417e-835c-28abdd156cdb-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.893909 master-0 kubenswrapper[26053]: I0318 09:03:38.892919 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:38.893972 master-0 kubenswrapper[26053]: I0318 09:03:38.893942 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:38.894021 master-0 kubenswrapper[26053]: I0318 09:03:38.893999 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:38.894088 master-0 kubenswrapper[26053]: I0318 09:03:38.894058 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfrf\" (UniqueName: \"kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.894133 master-0 kubenswrapper[26053]: I0318 09:03:38.894112 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: 
\"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:38.894187 master-0 kubenswrapper[26053]: I0318 09:03:38.894156 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:38.894230 master-0 kubenswrapper[26053]: I0318 09:03:38.894212 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:38.894442 master-0 kubenswrapper[26053]: I0318 09:03:38.894252 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.894477 master-0 kubenswrapper[26053]: I0318 09:03:38.894462 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.894505 master-0 kubenswrapper[26053]: 
I0318 09:03:38.894492 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.894543 master-0 kubenswrapper[26053]: I0318 09:03:38.894523 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:38.894596 master-0 kubenswrapper[26053]: I0318 09:03:38.894560 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.894635 master-0 kubenswrapper[26053]: I0318 09:03:38.894617 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:38.894677 master-0 kubenswrapper[26053]: I0318 09:03:38.894658 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.894855 master-0 kubenswrapper[26053]: I0318 09:03:38.894693 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4l97\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.894887 master-0 kubenswrapper[26053]: I0318 09:03:38.894867 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.894918 master-0 kubenswrapper[26053]: I0318 09:03:38.894899 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.894946 master-0 kubenswrapper[26053]: I0318 09:03:38.894932 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:38.895016 master-0 kubenswrapper[26053]: I0318 09:03:38.894995 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.895054 master-0 kubenswrapper[26053]: I0318 09:03:38.895034 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.895087 master-0 kubenswrapper[26053]: I0318 09:03:38.895067 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.895119 master-0 kubenswrapper[26053]: I0318 09:03:38.895102 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 09:03:38.895150 master-0 kubenswrapper[26053]: I0318 09:03:38.895133 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.895295 master-0 kubenswrapper[26053]: I0318 09:03:38.895275 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt64s\" (UniqueName: \"kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:38.895363 master-0 kubenswrapper[26053]: I0318 09:03:38.895343 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g42f4\" (UniqueName: \"kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:38.895399 master-0 kubenswrapper[26053]: I0318 09:03:38.895380 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.895632 master-0 kubenswrapper[26053]: I0318 09:03:38.895415 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " 
pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:38.895668 master-0 kubenswrapper[26053]: I0318 09:03:38.895650 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" Mar 18 09:03:38.895697 master-0 kubenswrapper[26053]: I0318 09:03:38.895682 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 09:03:38.895729 master-0 kubenswrapper[26053]: I0318 09:03:38.895713 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:38.895921 master-0 kubenswrapper[26053]: I0318 09:03:38.895746 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:38.895965 master-0 kubenswrapper[26053]: I0318 09:03:38.895947 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:38.896069 master-0 kubenswrapper[26053]: I0318 09:03:38.896051 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.896781 master-0 kubenswrapper[26053]: I0318 09:03:38.896753 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e86268c9-7a83-4ccb-979a-feff00cb4b3e-config\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:38.897031 master-0 kubenswrapper[26053]: I0318 09:03:38.897008 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/af1fbcf2-d4de-4015-89fc-2565e855a04d-cni-binary-copy\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.897031 master-0 kubenswrapper[26053]: I0318 09:03:38.897008 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:38.897085 master-0 kubenswrapper[26053]: I0318 09:03:38.897011 26053 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:38.897560 master-0 kubenswrapper[26053]: I0318 09:03:38.897535 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.897739 master-0 kubenswrapper[26053]: I0318 09:03:38.897714 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.897776 master-0 kubenswrapper[26053]: I0318 09:03:38.897747 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:38.897776 master-0 kubenswrapper[26053]: I0318 09:03:38.897767 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: 
\"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:38.897831 master-0 kubenswrapper[26053]: I0318 09:03:38.897793 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.897831 master-0 kubenswrapper[26053]: I0318 09:03:38.897816 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:38.897899 master-0 kubenswrapper[26053]: I0318 09:03:38.897884 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.897933 master-0 kubenswrapper[26053]: I0318 09:03:38.897913 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ggjn\" (UniqueName: \"kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:38.897979 master-0 kubenswrapper[26053]: I0318 09:03:38.897965 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.898014 master-0 kubenswrapper[26053]: I0318 09:03:38.897994 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:38.898043 master-0 kubenswrapper[26053]: I0318 09:03:38.898021 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.898075 master-0 kubenswrapper[26053]: I0318 09:03:38.898043 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.898075 master-0 kubenswrapper[26053]: I0318 09:03:38.898067 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " 
pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.898197 master-0 kubenswrapper[26053]: I0318 09:03:38.898175 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:38.898228 master-0 kubenswrapper[26053]: I0318 09:03:38.898200 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.898228 master-0 kubenswrapper[26053]: I0318 09:03:38.898221 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.898289 master-0 kubenswrapper[26053]: I0318 09:03:38.898253 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.898339 master-0 kubenswrapper[26053]: I0318 09:03:38.898323 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:38.898529 master-0 kubenswrapper[26053]: I0318 09:03:38.898512 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.898952 master-0 kubenswrapper[26053]: I0318 09:03:38.898909 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.899142 master-0 kubenswrapper[26053]: I0318 09:03:38.899117 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-webhook-cert\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.899185 master-0 kubenswrapper[26053]: I0318 09:03:38.899160 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:38.899223 master-0 kubenswrapper[26053]: I0318 09:03:38.899184 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mkcq\" (UniqueName: \"kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:38.899223 master-0 kubenswrapper[26053]: I0318 09:03:38.899205 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:38.899276 master-0 kubenswrapper[26053]: I0318 09:03:38.899222 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-config\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.899306 master-0 kubenswrapper[26053]: I0318 09:03:38.899227 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:38.899760 master-0 kubenswrapper[26053]: I0318 09:03:38.899728 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1deb139f-1903-417e-835c-28abdd156cdb-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:38.899847 master-0 kubenswrapper[26053]: I0318 09:03:38.899817 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr9zx\" (UniqueName: \"kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:38.900014 master-0 kubenswrapper[26053]: I0318 09:03:38.899981 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.900116 master-0 kubenswrapper[26053]: I0318 09:03:38.900095 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-metrics-tls\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:38.900151 master-0 kubenswrapper[26053]: I0318 09:03:38.900103 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:38.900212 master-0 kubenswrapper[26053]: I0318 09:03:38.900158 26053 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw" Mar 18 09:03:38.900310 master-0 kubenswrapper[26053]: I0318 09:03:38.900276 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.900391 master-0 kubenswrapper[26053]: I0318 09:03:38.900367 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.900482 master-0 kubenswrapper[26053]: I0318 09:03:38.900456 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 09:03:38.900612 master-0 kubenswrapper[26053]: I0318 09:03:38.900544 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: 
\"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:38.900719 master-0 kubenswrapper[26053]: I0318 09:03:38.900639 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwnvl\" (UniqueName: \"kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:38.900960 master-0 kubenswrapper[26053]: I0318 09:03:38.900776 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:38.900960 master-0 kubenswrapper[26053]: I0318 09:03:38.900860 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.901033 master-0 kubenswrapper[26053]: I0318 09:03:38.900960 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 09:03:38.901066 master-0 kubenswrapper[26053]: I0318 09:03:38.901037 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:38.901232 master-0 kubenswrapper[26053]: I0318 09:03:38.901197 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrbj\" (UniqueName: \"kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 09:03:38.901281 master-0 kubenswrapper[26053]: I0318 09:03:38.901255 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.901330 master-0 kubenswrapper[26053]: I0318 09:03:38.901307 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:38.901383 master-0 kubenswrapper[26053]: I0318 09:03:38.901360 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " 
pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:38.901428 master-0 kubenswrapper[26053]: I0318 09:03:38.901408 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.901474 master-0 kubenswrapper[26053]: I0318 09:03:38.901453 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:38.901522 master-0 kubenswrapper[26053]: I0318 09:03:38.901502 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6p7s\" (UniqueName: \"kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.901552 master-0 kubenswrapper[26053]: I0318 09:03:38.901535 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c56e1ac-8752-4e46-8692-93716087f0e0-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.901599 master-0 kubenswrapper[26053]: I0318 09:03:38.901548 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:38.901646 master-0 kubenswrapper[26053]: I0318 09:03:38.901620 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:38.901688 master-0 kubenswrapper[26053]: I0318 09:03:38.901668 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:38.901737 master-0 kubenswrapper[26053]: I0318 09:03:38.901716 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:38.901737 master-0 kubenswrapper[26053]: I0318 09:03:38.901732 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/680006ef-a955-491e-b6a3-1ca7fcc20165-signing-cabundle\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 
09:03:38.901793 master-0 kubenswrapper[26053]: I0318 09:03:38.901762 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.901832 master-0 kubenswrapper[26053]: I0318 09:03:38.901810 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:38.901880 master-0 kubenswrapper[26053]: I0318 09:03:38.901860 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:38.901921 master-0 kubenswrapper[26053]: I0318 09:03:38.901901 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:38.901969 master-0 kubenswrapper[26053]: I0318 09:03:38.901948 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:38.901998 master-0 kubenswrapper[26053]: I0318 09:03:38.901978 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-config\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:38.902028 master-0 kubenswrapper[26053]: I0318 09:03:38.901994 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.902073 master-0 kubenswrapper[26053]: I0318 09:03:38.902052 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfms\" (UniqueName: \"kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 09:03:38.902123 master-0 kubenswrapper[26053]: I0318 09:03:38.902100 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 09:03:38.902174 master-0 kubenswrapper[26053]: I0318 09:03:38.902153 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:38.902250 master-0 kubenswrapper[26053]: I0318 09:03:38.902206 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:38.902302 master-0 kubenswrapper[26053]: I0318 09:03:38.902281 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:38.902354 master-0 kubenswrapper[26053]: I0318 09:03:38.902333 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:38.902398 master-0 kubenswrapper[26053]: I0318 09:03:38.902377 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 09:03:38.902445 master-0 kubenswrapper[26053]: I0318 09:03:38.902424 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.902474 master-0 kubenswrapper[26053]: I0318 09:03:38.902447 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.902502 master-0 kubenswrapper[26053]: I0318 09:03:38.902474 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:38.902543 master-0 kubenswrapper[26053]: I0318 09:03:38.902521 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:38.902608 master-0 kubenswrapper[26053]: I0318 09:03:38.902586 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:38.902667 master-0 kubenswrapper[26053]: I0318 09:03:38.902642 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:38.902712 master-0 kubenswrapper[26053]: I0318 09:03:38.902692 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:38.902765 master-0 kubenswrapper[26053]: I0318 09:03:38.902743 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:38.902818 master-0 kubenswrapper[26053]: I0318 09:03:38.902790 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q4k8\" (UniqueName: \"kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") 
" pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:38.902866 master-0 kubenswrapper[26053]: I0318 09:03:38.902844 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5cgw\" (UniqueName: \"kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" Mar 18 09:03:38.902914 master-0 kubenswrapper[26053]: I0318 09:03:38.902894 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:38.902976 master-0 kubenswrapper[26053]: I0318 09:03:38.902937 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.903036 master-0 kubenswrapper[26053]: I0318 09:03:38.903010 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:38.903183 master-0 kubenswrapper[26053]: I0318 09:03:38.903162 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1c322813-b574-4b46-b760-208ccecd01a5-catalog-content\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:38.903261 master-0 kubenswrapper[26053]: I0318 09:03:38.902697 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/600c92a1-56c5-497b-a8f0-746830f4180e-iptables-alerter-script\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:38.903339 master-0 kubenswrapper[26053]: I0318 09:03:38.903319 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-textfile\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.903513 master-0 kubenswrapper[26053]: I0318 09:03:38.903493 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:38.903547 master-0 kubenswrapper[26053]: I0318 09:03:38.903023 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2682e4-cb63-4102-a83e-ef28023e273a-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 
09:03:38.903594 master-0 kubenswrapper[26053]: I0318 09:03:38.903541 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:38.903727 master-0 kubenswrapper[26053]: I0318 09:03:38.903697 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 09:03:38.903782 master-0 kubenswrapper[26053]: I0318 09:03:38.903760 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.903834 master-0 kubenswrapper[26053]: I0318 09:03:38.903813 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:38.903880 master-0 kubenswrapper[26053]: I0318 09:03:38.903857 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod 
\"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.903910 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.903817 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ca9d4694-8675-47c5-819f-89bba9dcdc0f-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904345 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/f918d08d-df7c-4e8d-85ba-1c92d766db16-snapshots\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904406 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904445 26053 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904481 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904520 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jcqf\" (UniqueName: \"kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904548 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904635 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:38.905602 master-0 kubenswrapper[26053]: I0318 09:03:38.904968 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5fd4cc-959e-4878-82e9-b0f90dba6553-catalog-content\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:38.910414 master-0 kubenswrapper[26053]: I0318 09:03:38.909067 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 09:03:38.910414 master-0 kubenswrapper[26053]: I0318 09:03:38.909516 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-serving-cert\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:38.932678 master-0 kubenswrapper[26053]: I0318 09:03:38.932632 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:03:38.941539 master-0 kubenswrapper[26053]: I0318 09:03:38.941498 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-trusted-ca-bundle\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.953555 master-0 kubenswrapper[26053]: I0318 09:03:38.946884 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 18 09:03:38.953555 master-0 kubenswrapper[26053]: I0318 09:03:38.950534 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-client\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:38.973899 master-0 kubenswrapper[26053]: I0318 09:03:38.973764 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:03:38.979603 master-0 kubenswrapper[26053]: I0318 09:03:38.978344 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7cac1300-44c1-4a7d-8d14-efa9702ad9df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:38.987272 master-0 kubenswrapper[26053]: I0318 09:03:38.987220 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 18 09:03:38.989724 master-0 kubenswrapper[26053]: I0318 09:03:38.989524 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-serving-cert\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005465 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005558 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005705 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005757 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005782 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.005911 master-0 kubenswrapper[26053]: I0318 09:03:39.005806 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.005973 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.006010 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.006037 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.006061 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.006100 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.006227 master-0 kubenswrapper[26053]: I0318 09:03:39.006213 
26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.006377 master-0 kubenswrapper[26053]: I0318 09:03:39.006312 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.006377 master-0 kubenswrapper[26053]: I0318 09:03:39.006345 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.006429 master-0 kubenswrapper[26053]: I0318 09:03:39.006412 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.006463 master-0 kubenswrapper[26053]: I0318 09:03:39.006435 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.006463 master-0 kubenswrapper[26053]: I0318 09:03:39.006459 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.006535 master-0 kubenswrapper[26053]: I0318 09:03:39.006511 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.006603 master-0 kubenswrapper[26053]: I0318 09:03:39.006541 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.006635 master-0 kubenswrapper[26053]: I0318 09:03:39.006621 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.006673 master-0 kubenswrapper[26053]: I0318 09:03:39.006647 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.006701 master-0 kubenswrapper[26053]: I0318 09:03:39.006678 26053 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.006729 master-0 kubenswrapper[26053]: I0318 09:03:39.006712 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.006758 master-0 kubenswrapper[26053]: I0318 09:03:39.006742 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.006825 master-0 kubenswrapper[26053]: I0318 09:03:39.006794 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.006872 master-0 kubenswrapper[26053]: I0318 09:03:39.006854 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.006902 master-0 kubenswrapper[26053]: I0318 09:03:39.006883 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.006950 master-0 kubenswrapper[26053]: I0318 09:03:39.006933 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.007014 master-0 kubenswrapper[26053]: I0318 09:03:39.006988 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.007092 master-0 kubenswrapper[26053]: I0318 09:03:39.007067 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007136 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007224 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007266 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007291 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007324 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007366 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007391 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:39.007410 master-0 kubenswrapper[26053]: I0318 09:03:39.007414 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.007657 master-0 kubenswrapper[26053]: I0318 09:03:39.007448 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 09:03:39.007657 master-0 kubenswrapper[26053]: I0318 09:03:39.007483 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.007657 master-0 kubenswrapper[26053]: I0318 09:03:39.007515 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.007657 
master-0 kubenswrapper[26053]: I0318 09:03:39.007558 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.007657 master-0 kubenswrapper[26053]: I0318 09:03:39.007600 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.007657 master-0 kubenswrapper[26053]: I0318 09:03:39.007632 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.007817 master-0 kubenswrapper[26053]: I0318 09:03:39.007681 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:39.007817 master-0 kubenswrapper[26053]: I0318 09:03:39.007720 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: 
\"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.007817 master-0 kubenswrapper[26053]: I0318 09:03:39.007745 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.007817 master-0 kubenswrapper[26053]: I0318 09:03:39.007779 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.007933 master-0 kubenswrapper[26053]: I0318 09:03:39.007819 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.007933 master-0 kubenswrapper[26053]: I0318 09:03:39.007845 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.007933 master-0 kubenswrapper[26053]: I0318 09:03:39.007869 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: 
\"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.008015 master-0 kubenswrapper[26053]: I0318 09:03:39.007941 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.008015 master-0 kubenswrapper[26053]: I0318 09:03:39.007972 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.008065 master-0 kubenswrapper[26053]: I0318 09:03:39.008038 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.008096 master-0 kubenswrapper[26053]: I0318 09:03:39.008064 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.008124 master-0 kubenswrapper[26053]: I0318 09:03:39.008096 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: 
\"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:39.008152 master-0 kubenswrapper[26053]: I0318 09:03:39.008122 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.008183 master-0 kubenswrapper[26053]: I0318 09:03:39.008166 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.008289 master-0 kubenswrapper[26053]: I0318 09:03:39.008262 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.008325 master-0 kubenswrapper[26053]: I0318 09:03:39.008305 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.008402 master-0 kubenswrapper[26053]: I0318 09:03:39.008384 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.008430 master-0 kubenswrapper[26053]: I0318 09:03:39.008414 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.008544 master-0 kubenswrapper[26053]: I0318 09:03:39.008527 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:39.008637 master-0 kubenswrapper[26053]: I0318 09:03:39.008581 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.008637 master-0 kubenswrapper[26053]: I0318 09:03:39.008606 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.008702 master-0 kubenswrapper[26053]: I0318 09:03:39.008640 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.008773 master-0 kubenswrapper[26053]: I0318 09:03:39.008754 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-etc-kubernetes\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009207 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009277 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-netns\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009324 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c5e43736-33c3-4949-98ca-971332541d64-hosts-file\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 
09:03:39.009349 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-kubelet\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009384 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-lib-modules\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009390 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-os-release\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009440 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009453 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-node-log\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 
09:03:39.009463 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009493 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-socket-dir-parent\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009522 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-bin\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009542 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-cnibin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009593 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.011656 master-0 
kubenswrapper[26053]: I0318 09:03:39.009598 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-modprobe-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009615 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009624 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-root\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009634 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-sys\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009646 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-systemd-units\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009657 26053 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-kubernetes\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009668 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/600c92a1-56c5-497b-a8f0-746830f4180e-host-slash\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009683 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-node-pullsecrets\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009685 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-var-lib-kubelet\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009688 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-conf-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009704 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009710 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-kubelet\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009730 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-cni-netd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009732 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-hostroot\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009725 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a1f2b373-0c85-4028-9089-9e9dff5d37b5-audit-dir\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009742 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/9cc640bf-cb5f-4493-b47b-6ea6f524525e-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009752 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-systemd\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009768 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-host-etc-kube\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009769 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-wtmp\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009786 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-multus-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009788 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-run-ovn-kubernetes\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009805 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-run\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009810 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-system-cni-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009829 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cnibin\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009833 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-host-slash\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009840 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-rootfs\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009853 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-etc-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009863 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-dir\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009866 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009879 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-log-socket\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009885 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/599418d3-6afa-46ab-9afa-659134f7ac94-sys\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009900 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009913 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-os-release\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009923 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysconfig\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009935 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009940 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-ovn\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009949 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-conf\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009951 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-multus-certs\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009966 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-system-cni-dir\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009968 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-multus\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009976 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-k8s-cni-cncf-io\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.009985 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-host\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010003 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/cda44dd8-895a-4eab-bedc-83f38efa2482-etc-sysctl-d\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010021 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/800297fe-77fd-4f58-ade2-32a147cd7d5c-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010039 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fdd2f1fd-1a94-4f4e-a275-b075f432f763-tuning-conf-dir\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010046 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-var-lib-openvswitch\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010050 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/411d544f-e105-44f0-927a-f61406b3f070-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010063 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-run-netns\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010163 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010291 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/af1fbcf2-d4de-4015-89fc-2565e855a04d-host-var-lib-cni-bin\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:39.011656 master-0 kubenswrapper[26053]: I0318 09:03:39.010551 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/8dacdedc-c6ad-40d4-afdc-59a31be417fe-run-systemd\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:39.013401 master-0 kubenswrapper[26053]: I0318 09:03:39.013365 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:03:39.031005 master-0 kubenswrapper[26053]: I0318 09:03:39.030946 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 18 09:03:39.038983 master-0 kubenswrapper[26053]: I0318 09:03:39.038856 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a1f2b373-0c85-4028-9089-9e9dff5d37b5-encryption-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.048683 master-0 kubenswrapper[26053]: I0318 09:03:39.047954 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:03:39.068079 master-0 kubenswrapper[26053]: I0318 09:03:39.068023 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:03:39.079252 master-0 kubenswrapper[26053]: I0318 09:03:39.077642 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-config\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.088448 master-0 kubenswrapper[26053]: I0318 09:03:39.088309 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:03:39.096470 master-0 kubenswrapper[26053]: 
I0318 09:03:39.096011 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-etcd-serving-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.107510 master-0 kubenswrapper[26053]: I0318 09:03:39.107468 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:03:39.119511 master-0 kubenswrapper[26053]: I0318 09:03:39.117994 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a1f2b373-0c85-4028-9089-9e9dff5d37b5-image-import-ca\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:39.142682 master-0 kubenswrapper[26053]: I0318 09:03:39.131870 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 18 09:03:39.143890 master-0 kubenswrapper[26053]: I0318 09:03:39.143848 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e48101ca-f356-45e3-93d7-4e17b8d8066c-metrics-certs\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 09:03:39.150451 master-0 kubenswrapper[26053]: I0318 09:03:39.150392 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:03:39.159012 master-0 kubenswrapper[26053]: I0318 09:03:39.158966 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod 
\"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:03:39.170113 master-0 kubenswrapper[26053]: I0318 09:03:39.169235 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:03:39.178910 master-0 kubenswrapper[26053]: I0318 09:03:39.178871 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-audit-policies\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.187866 master-0 kubenswrapper[26053]: I0318 09:03:39.187835 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 09:03:39.196940 master-0 kubenswrapper[26053]: I0318 09:03:39.196903 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-client\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.208138 master-0 kubenswrapper[26053]: I0318 09:03:39.208101 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:03:39.217835 master-0 kubenswrapper[26053]: I0318 09:03:39.217784 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-serving-cert\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.227689 master-0 kubenswrapper[26053]: I0318 
09:03:39.227605 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 09:03:39.237911 master-0 kubenswrapper[26053]: I0318 09:03:39.237873 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b6612f-3a51-4a67-a566-8c520f85c6c2-encryption-config\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.247656 master-0 kubenswrapper[26053]: I0318 09:03:39.247626 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 09:03:39.267681 master-0 kubenswrapper[26053]: I0318 09:03:39.267632 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 18 09:03:39.287332 master-0 kubenswrapper[26053]: I0318 09:03:39.287286 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 09:03:39.294193 master-0 kubenswrapper[26053]: I0318 09:03:39.294160 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-etcd-serving-ca\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.308276 master-0 kubenswrapper[26053]: I0318 09:03:39.308233 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:03:39.309527 master-0 kubenswrapper[26053]: I0318 09:03:39.309485 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.327618 master-0 kubenswrapper[26053]: I0318 09:03:39.327555 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 09:03:39.328914 master-0 kubenswrapper[26053]: I0318 09:03:39.328868 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b6612f-3a51-4a67-a566-8c520f85c6c2-trusted-ca-bundle\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:39.348329 master-0 kubenswrapper[26053]: I0318 09:03:39.348267 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 09:03:39.358859 master-0 kubenswrapper[26053]: I0318 09:03:39.358781 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/fdd2f1fd-1a94-4f4e-a275-b075f432f763-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:39.363081 master-0 kubenswrapper[26053]: I0318 09:03:39.363031 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-check-endpoints/0.log" Mar 18 09:03:39.365286 master-0 kubenswrapper[26053]: I0318 09:03:39.365121 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" 
containerID="b98c563bab7682462c40e7da7e26ff18216a7a69aec7a61033377ca04547a6d0" exitCode=255 Mar 18 09:03:39.366034 master-0 kubenswrapper[26053]: I0318 09:03:39.365546 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.367608 master-0 kubenswrapper[26053]: I0318 09:03:39.367417 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 09:03:39.376072 master-0 kubenswrapper[26053]: I0318 09:03:39.375995 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:39.393147 master-0 kubenswrapper[26053]: I0318 09:03:39.392981 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 18 09:03:39.406962 master-0 kubenswrapper[26053]: I0318 09:03:39.406908 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 09:03:39.409188 master-0 kubenswrapper[26053]: I0318 09:03:39.409158 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-metrics-tls\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:39.416040 master-0 kubenswrapper[26053]: I0318 09:03:39.416005 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:03:39.416292 master-0 kubenswrapper[26053]: I0318 09:03:39.416082 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock" (OuterVolumeSpecName: "var-lock") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:39.416333 master-0 kubenswrapper[26053]: I0318 09:03:39.416253 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:03:39.416396 master-0 kubenswrapper[26053]: I0318 09:03:39.416381 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:03:39.418107 master-0 kubenswrapper[26053]: I0318 09:03:39.418081 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:39.418180 master-0 kubenswrapper[26053]: I0318 09:03:39.418113 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:03:39.426592 master-0 kubenswrapper[26053]: I0318 09:03:39.426536 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 09:03:39.434377 master-0 kubenswrapper[26053]: I0318 09:03:39.434328 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-config-volume\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:39.450181 master-0 kubenswrapper[26053]: I0318 09:03:39.450137 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 18 09:03:39.467147 master-0 kubenswrapper[26053]: I0318 09:03:39.467097 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 18 09:03:39.517366 master-0 kubenswrapper[26053]: I0318 09:03:39.517267 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-kg24z" Mar 18 09:03:39.526771 master-0 kubenswrapper[26053]: I0318 09:03:39.526720 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 18 09:03:39.533658 master-0 kubenswrapper[26053]: 
I0318 09:03:39.533620 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9cc640bf-cb5f-4493-b47b-6ea6f524525e-serving-cert\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.547953 master-0 kubenswrapper[26053]: I0318 09:03:39.546828 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:03:39.567736 master-0 kubenswrapper[26053]: I0318 09:03:39.567695 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 18 09:03:39.572503 master-0 kubenswrapper[26053]: I0318 09:03:39.572478 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9cc640bf-cb5f-4493-b47b-6ea6f524525e-service-ca\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:39.587857 master-0 kubenswrapper[26053]: I0318 09:03:39.587801 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 09:03:39.598344 master-0 kubenswrapper[26053]: I0318 09:03:39.598278 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-metrics-certs\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:39.609273 master-0 kubenswrapper[26053]: I0318 09:03:39.609225 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 
09:03:39.611809 master-0 kubenswrapper[26053]: I0318 09:03:39.611768 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93cb5ef1-e8f1-4d11-8c93-1abf24626176-service-ca-bundle\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:39.627648 master-0 kubenswrapper[26053]: I0318 09:03:39.627605 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 09:03:39.628084 master-0 kubenswrapper[26053]: I0318 09:03:39.628055 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-default-certificate\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:39.646694 master-0 kubenswrapper[26053]: I0318 09:03:39.646647 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 18 09:03:39.654854 master-0 kubenswrapper[26053]: I0318 09:03:39.654814 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/93cb5ef1-e8f1-4d11-8c93-1abf24626176-stats-auth\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:39.669987 master-0 kubenswrapper[26053]: I0318 09:03:39.669942 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:03:39.689283 master-0 kubenswrapper[26053]: I0318 09:03:39.689237 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 09:03:39.706756 
master-0 kubenswrapper[26053]: I0318 09:03:39.706713 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lgw5q" Mar 18 09:03:39.726983 master-0 kubenswrapper[26053]: I0318 09:03:39.726932 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-mn6mb" Mar 18 09:03:39.746604 master-0 kubenswrapper[26053]: I0318 09:03:39.746546 26053 request.go:700] Waited for 1.005248233s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-l7k6v&limit=500&resourceVersion=0 Mar 18 09:03:39.747575 master-0 kubenswrapper[26053]: I0318 09:03:39.747537 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-l7k6v" Mar 18 09:03:39.766989 master-0 kubenswrapper[26053]: I0318 09:03:39.766931 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zlc9x" Mar 18 09:03:39.787858 master-0 kubenswrapper[26053]: I0318 09:03:39.787752 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 09:03:39.790102 master-0 kubenswrapper[26053]: I0318 09:03:39.790063 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/2a864188-ada6-4ec2-bf9f-72dab210f0ce-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 09:03:39.808988 master-0 kubenswrapper[26053]: I0318 09:03:39.808942 26053 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 18 09:03:39.827330 master-0 kubenswrapper[26053]: I0318 09:03:39.827270 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 18 09:03:39.836977 master-0 kubenswrapper[26053]: I0318 09:03:39.836929 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:39.847621 master-0 kubenswrapper[26053]: I0318 09:03:39.847559 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 09:03:39.853420 master-0 kubenswrapper[26053]: I0318 09:03:39.853369 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cert\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:39.866493 master-0 kubenswrapper[26053]: I0318 09:03:39.866458 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 09:03:39.867665 master-0 kubenswrapper[26053]: I0318 09:03:39.867629 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb8f3615-9e89-4b51-87a2-7d168c81adf3-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:39.886138 
master-0 kubenswrapper[26053]: E0318 09:03:39.886094 26053 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886138 26053 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886160 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886207 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls podName:cdcd27a4-6d46-47af-a14a-65f6501c10f0 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.386185311 +0000 UTC m=+7.879536692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls") pod "machine-approver-5c6485487f-r4mv6" (UID: "cdcd27a4-6d46-47af-a14a-65f6501c10f0") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886221 26053 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886230 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates podName:cdf1c657-a9dc-455a-b2fd-27a518bc5199 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:03:40.386220402 +0000 UTC m=+7.879571903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates") pod "prometheus-operator-admission-webhook-69c6b55594-4jrzp" (UID: "cdf1c657-a9dc-455a-b2fd-27a518bc5199") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886288 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.386266213 +0000 UTC m=+7.879617604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.886315 master-0 kubenswrapper[26053]: E0318 09:03:39.886314 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle podName:f918d08d-df7c-4e8d-85ba-1c92d766db16 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.386299364 +0000 UTC m=+7.879650755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle") pod "insights-operator-68bf6ff9d6-89rtc" (UID: "f918d08d-df7c-4e8d-85ba-1c92d766db16") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887060 master-0 kubenswrapper[26053]: E0318 09:03:39.887028 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887060 master-0 kubenswrapper[26053]: E0318 09:03:39.887060 26053 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887171 master-0 kubenswrapper[26053]: E0318 09:03:39.887092 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config podName:7b7ac7ef-060f-45d2-8988-006d45402e00 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387080064 +0000 UTC m=+7.880431445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config") pod "route-controller-manager-7dbcb47f86-ptccg" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887171 master-0 kubenswrapper[26053]: E0318 09:03:39.887110 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387103105 +0000 UTC m=+7.880454486 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887322 master-0 kubenswrapper[26053]: E0318 09:03:39.887297 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.887376 master-0 kubenswrapper[26053]: E0318 09:03:39.887336 26053 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887376 master-0 kubenswrapper[26053]: E0318 09:03:39.887345 26053 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887475 master-0 kubenswrapper[26053]: E0318 09:03:39.887349 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387332341 +0000 UTC m=+7.880683722 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.887475 master-0 kubenswrapper[26053]: E0318 09:03:39.887413 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config podName:bef948b9-eef4-404b-9b49-6e4a2ceea73b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387398662 +0000 UTC m=+7.880750053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config") pod "machine-config-operator-84d549f6d5-vj84b" (UID: "bef948b9-eef4-404b-9b49-6e4a2ceea73b") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.887562 master-0 kubenswrapper[26053]: E0318 09:03:39.887488 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images podName:fdb52116-9c55-4464-99c8-fc2e4559996b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387478084 +0000 UTC m=+7.880829485 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images") pod "machine-api-operator-6fbb6cf6f9-n4t2h" (UID: "fdb52116-9c55-4464-99c8-fc2e4559996b") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.887728 master-0 kubenswrapper[26053]: I0318 09:03:39.887699 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-gx9ws"
Mar 18 09:03:39.887893 master-0 kubenswrapper[26053]: E0318 09:03:39.887869 26053 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.887962 master-0 kubenswrapper[26053]: E0318 09:03:39.887933 26053 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.888007 master-0 kubenswrapper[26053]: E0318 09:03:39.887934 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config podName:cdcd27a4-6d46-47af-a14a-65f6501c10f0 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387914455 +0000 UTC m=+7.881265846 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config") pod "machine-approver-5c6485487f-r4mv6" (UID: "cdcd27a4-6d46-47af-a14a-65f6501c10f0") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.888007 master-0 kubenswrapper[26053]: E0318 09:03:39.887991 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert podName:7b7ac7ef-060f-45d2-8988-006d45402e00 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.387979577 +0000 UTC m=+7.881330978 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert") pod "route-controller-manager-7dbcb47f86-ptccg" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.888898 master-0 kubenswrapper[26053]: E0318 09:03:39.888877 26053 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.888898 master-0 kubenswrapper[26053]: E0318 09:03:39.888888 26053 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.888988 master-0 kubenswrapper[26053]: E0318 09:03:39.888931 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls podName:25781967-12ce-490e-94aa-9b9722f495da nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.388915241 +0000 UTC m=+7.882266632 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-s98kp" (UID: "25781967-12ce-490e-94aa-9b9722f495da") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.888988 master-0 kubenswrapper[26053]: E0318 09:03:39.888955 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls podName:94e2a8f0-2c2e-43da-9fa9-69edfcd77830 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.388946282 +0000 UTC m=+7.882297673 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-vwqc4" (UID: "94e2a8f0-2c2e-43da-9fa9-69edfcd77830") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.889057 master-0 kubenswrapper[26053]: E0318 09:03:39.889019 26053 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.889084 master-0 kubenswrapper[26053]: E0318 09:03:39.889060 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config podName:94e2a8f0-2c2e-43da-9fa9-69edfcd77830 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.389050245 +0000 UTC m=+7.882401626 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-vwqc4" (UID: "94e2a8f0-2c2e-43da-9fa9-69edfcd77830") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.889114 master-0 kubenswrapper[26053]: E0318 09:03:39.889089 26053 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.889188 master-0 kubenswrapper[26053]: E0318 09:03:39.889171 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config podName:15798f4d-8bcc-4e24-bb18-8dff1f4edf59 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.389128367 +0000 UTC m=+7.882479758 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-nbkgf" (UID: "15798f4d-8bcc-4e24-bb18-8dff1f4edf59") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890217 master-0 kubenswrapper[26053]: E0318 09:03:39.890186 26053 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890277 master-0 kubenswrapper[26053]: E0318 09:03:39.890212 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.890277 master-0 kubenswrapper[26053]: E0318 09:03:39.890216 26053 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.890277 master-0 kubenswrapper[26053]: E0318 09:03:39.890188 26053 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890277 master-0 kubenswrapper[26053]: E0318 09:03:39.890251 26053 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890229 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls podName:599418d3-6afa-46ab-9afa-659134f7ac94 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390219145 +0000 UTC m=+7.883570526 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls") pod "node-exporter-kp8pg" (UID: "599418d3-6afa-46ab-9afa-659134f7ac94") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890308 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config podName:fdb52116-9c55-4464-99c8-fc2e4559996b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390290687 +0000 UTC m=+7.883642138 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config") pod "machine-api-operator-6fbb6cf6f9-n4t2h" (UID: "fdb52116-9c55-4464-99c8-fc2e4559996b") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890238 26053 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890327 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs podName:14489ef7-8df3-4a3b-a137-3a78e89d425b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390318257 +0000 UTC m=+7.883669768 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs") pod "machine-config-server-rw7hw" (UID: "14489ef7-8df3-4a3b-a137-3a78e89d425b") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890346 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca podName:599418d3-6afa-46ab-9afa-659134f7ac94 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390336468 +0000 UTC m=+7.883687939 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca") pod "node-exporter-kp8pg" (UID: "599418d3-6afa-46ab-9afa-659134f7ac94") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890364 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390356438 +0000 UTC m=+7.883707939 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.890443 master-0 kubenswrapper[26053]: E0318 09:03:39.890389 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config podName:eb8f3615-9e89-4b51-87a2-7d168c81adf3 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.390379809 +0000 UTC m=+7.883731360 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config") pod "cluster-baremetal-operator-6f69995874-mcd6d" (UID: "eb8f3615-9e89-4b51-87a2-7d168c81adf3") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.892351 master-0 kubenswrapper[26053]: E0318 09:03:39.892318 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.892351 master-0 kubenswrapper[26053]: E0318 09:03:39.892346 26053 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892369 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.39235186 +0000 UTC m=+7.885703231 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892374 26053 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892399 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert podName:50a2c23f-26af-4c7f-8ea6-996bcfe173d0 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.39238288 +0000 UTC m=+7.885734271 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert") pod "packageserver-c8d87f55b-gsv6r" (UID: "50a2c23f-26af-4c7f-8ea6-996bcfe173d0") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892428 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert podName:9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.392415821 +0000 UTC m=+7.885767272 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert") pod "ingress-canary-226gc" (UID: "9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892470 26053 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892472 master-0 kubenswrapper[26053]: E0318 09:03:39.892470 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.892692 master-0 kubenswrapper[26053]: E0318 09:03:39.892502 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls podName:a7cf2cff-ca67-4cc6-99e7-99478ab89af4 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.392494013 +0000 UTC m=+7.885845484 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls") pod "machine-config-daemon-rhm2f" (UID: "a7cf2cff-ca67-4cc6-99e7-99478ab89af4") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.892692 master-0 kubenswrapper[26053]: E0318 09:03:39.892517 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca podName:2b59dbf5-0a61-4981-aed3-e73550615c4a nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.392510424 +0000 UTC m=+7.885861805 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-rm78n" (UID: "2b59dbf5-0a61-4981-aed3-e73550615c4a") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.893818 master-0 kubenswrapper[26053]: E0318 09:03:39.893788 26053 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.893818 master-0 kubenswrapper[26053]: E0318 09:03:39.893798 26053 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.893925 master-0 kubenswrapper[26053]: E0318 09:03:39.893840 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config podName:2b59dbf5-0a61-4981-aed3-e73550615c4a nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.393827038 +0000 UTC m=+7.887178509 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-rm78n" (UID: "2b59dbf5-0a61-4981-aed3-e73550615c4a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.893925 master-0 kubenswrapper[26053]: E0318 09:03:39.893863 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls podName:2b59dbf5-0a61-4981-aed3-e73550615c4a nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.393853988 +0000 UTC m=+7.887205479 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-rm78n" (UID: "2b59dbf5-0a61-4981-aed3-e73550615c4a") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.896087 master-0 kubenswrapper[26053]: E0318 09:03:39.896063 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.896129 master-0 kubenswrapper[26053]: E0318 09:03:39.896087 26053 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.896129 master-0 kubenswrapper[26053]: E0318 09:03:39.896103 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.396093376 +0000 UTC m=+7.889444757 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.896129 master-0 kubenswrapper[26053]: E0318 09:03:39.896065 26053 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.896233 master-0 kubenswrapper[26053]: E0318 09:03:39.896138 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config podName:599418d3-6afa-46ab-9afa-659134f7ac94 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.396124307 +0000 UTC m=+7.889475708 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config") pod "node-exporter-kp8pg" (UID: "599418d3-6afa-46ab-9afa-659134f7ac94") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.896233 master-0 kubenswrapper[26053]: E0318 09:03:39.896165 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config podName:a7cf2cff-ca67-4cc6-99e7-99478ab89af4 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.396155437 +0000 UTC m=+7.889506828 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config") pod "machine-config-daemon-rhm2f" (UID: "a7cf2cff-ca67-4cc6-99e7-99478ab89af4") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.897310 master-0 kubenswrapper[26053]: E0318 09:03:39.897287 26053 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.897364 master-0 kubenswrapper[26053]: E0318 09:03:39.897310 26053 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.897364 master-0 kubenswrapper[26053]: E0318 09:03:39.897339 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert podName:50a2c23f-26af-4c7f-8ea6-996bcfe173d0 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.397320827 +0000 UTC m=+7.890672228 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert") pod "packageserver-c8d87f55b-gsv6r" (UID: "50a2c23f-26af-4c7f-8ea6-996bcfe173d0") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.897364 master-0 kubenswrapper[26053]: E0318 09:03:39.897354 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.897364 master-0 kubenswrapper[26053]: E0318 09:03:39.897358 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images podName:bef948b9-eef4-404b-9b49-6e4a2ceea73b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.397350148 +0000 UTC m=+7.890701549 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images") pod "machine-config-operator-84d549f6d5-vj84b" (UID: "bef948b9-eef4-404b-9b49-6e4a2ceea73b") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.897476 master-0 kubenswrapper[26053]: E0318 09:03:39.897377 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap podName:15798f4d-8bcc-4e24-bb18-8dff1f4edf59 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.397369319 +0000 UTC m=+7.890720700 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-nbkgf" (UID: "15798f4d-8bcc-4e24-bb18-8dff1f4edf59") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.897476 master-0 kubenswrapper[26053]: E0318 09:03:39.897383 26053 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.897476 master-0 kubenswrapper[26053]: E0318 09:03:39.897398 26053 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.897476 master-0 kubenswrapper[26053]: E0318 09:03:39.897414 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert podName:a0cd1cf7-be6f-4baf-8761-69c693476de9 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.39740594 +0000 UTC m=+7.890757341 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-9xqgw" (UID: "a0cd1cf7-be6f-4baf-8761-69c693476de9") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.897476 master-0 kubenswrapper[26053]: E0318 09:03:39.897430 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config podName:8683c8c6-3a77-4b46-8898-142f9781b49c nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.39742325 +0000 UTC m=+7.890774641 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-rqgh5" (UID: "8683c8c6-3a77-4b46-8898-142f9781b49c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899625 master-0 kubenswrapper[26053]: E0318 09:03:39.899600 26053 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899674 master-0 kubenswrapper[26053]: E0318 09:03:39.899637 26053 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899674 master-0 kubenswrapper[26053]: E0318 09:03:39.899661 26053 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899641 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls podName:d7205eeb-912b-4c31-b08f-ed0b2a1319aa nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.399633237 +0000 UTC m=+7.892984618 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls") pod "machine-config-controller-b4f87c5b9-prrnd" (UID: "d7205eeb-912b-4c31-b08f-ed0b2a1319aa") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899684 26053 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899694 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls podName:3898c28b-69b0-46af-b085-37e12d7d80ba nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.399686678 +0000 UTC m=+7.893038059 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-xjbb5" (UID: "3898c28b-69b0-46af-b085-37e12d7d80ba") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899706 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca podName:a0cd1cf7-be6f-4baf-8761-69c693476de9 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.399700169 +0000 UTC m=+7.893051550 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-9xqgw" (UID: "a0cd1cf7-be6f-4baf-8761-69c693476de9") : failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899716 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls podName:8683c8c6-3a77-4b46-8898-142f9781b49c nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.399711239 +0000 UTC m=+7.893062620 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-rqgh5" (UID: "8683c8c6-3a77-4b46-8898-142f9781b49c") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899740 master-0 kubenswrapper[26053]: E0318 09:03:39.899720 26053 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.899905 master-0 kubenswrapper[26053]: E0318 09:03:39.899758 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls podName:bef948b9-eef4-404b-9b49-6e4a2ceea73b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.39974728 +0000 UTC m=+7.893098671 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls") pod "machine-config-operator-84d549f6d5-vj84b" (UID: "bef948b9-eef4-404b-9b49-6e4a2ceea73b") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900844 master-0 kubenswrapper[26053]: E0318 09:03:39.900823 26053 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900891 master-0 kubenswrapper[26053]: E0318 09:03:39.900863 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert podName:e88b021c-c810-4a68-aa48-d8666b52330e nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.400854158 +0000 UTC m=+7.894205539 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert") pod "cluster-autoscaler-operator-866dc4744-tx2pv" (UID: "e88b021c-c810-4a68-aa48-d8666b52330e") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900891 master-0 kubenswrapper[26053]: E0318 09:03:39.900880 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900961 master-0 kubenswrapper[26053]: E0318 09:03:39.900900 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.400894739 +0000 UTC m=+7.894246120 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900961 master-0 kubenswrapper[26053]: E0318 09:03:39.900928 26053 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.900961 master-0 kubenswrapper[26053]: E0318 09:03:39.900953 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls podName:fdb52116-9c55-4464-99c8-fc2e4559996b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.400946991 +0000 UTC m=+7.894298372 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-n4t2h" (UID: "fdb52116-9c55-4464-99c8-fc2e4559996b") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.901614 master-0 kubenswrapper[26053]: E0318 09:03:39.901036 26053 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.901614 master-0 kubenswrapper[26053]: E0318 09:03:39.901054 26053 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.901614 master-0 kubenswrapper[26053]: E0318 09:03:39.901070 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token podName:14489ef7-8df3-4a3b-a137-3a78e89d425b nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.401060504 +0000 UTC m=+7.894411885 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token") pod "machine-config-server-rw7hw" (UID: "14489ef7-8df3-4a3b-a137-3a78e89d425b") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.901614 master-0 kubenswrapper[26053]: E0318 09:03:39.901089 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert podName:f918d08d-df7c-4e8d-85ba-1c92d766db16 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.401077534 +0000 UTC m=+7.894428915 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert") pod "insights-operator-68bf6ff9d6-89rtc" (UID: "f918d08d-df7c-4e8d-85ba-1c92d766db16") : failed to sync secret cache: timed out waiting for the condition
Mar 18 09:03:39.902281 master-0 kubenswrapper[26053]: E0318 09:03:39.902258 26053 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 18 09:03:39.902347 master-0 kubenswrapper[26053]: E0318 09:03:39.902319 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images podName:eb8f3615-9e89-4b51-87a2-7d168c81adf3 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.402306706 +0000 UTC m=+7.895658167 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images") pod "cluster-baremetal-operator-6f69995874-mcd6d" (UID: "eb8f3615-9e89-4b51-87a2-7d168c81adf3") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.902403 master-0 kubenswrapper[26053]: E0318 09:03:39.902367 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2gj3dpncb7vk4: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.902445 master-0 kubenswrapper[26053]: E0318 09:03:39.902435 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.402420159 +0000 UTC m=+7.895771600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.903474 master-0 kubenswrapper[26053]: E0318 09:03:39.903408 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903483 26053 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903482 26053 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 
master-0 kubenswrapper[26053]: E0318 09:03:39.903521 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca podName:8683c8c6-3a77-4b46-8898-142f9781b49c nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.403500716 +0000 UTC m=+7.896852117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-rqgh5" (UID: "8683c8c6-3a77-4b46-8898-142f9781b49c") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903456 26053 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903543 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config podName:d7205eeb-912b-4c31-b08f-ed0b2a1319aa nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.403534147 +0000 UTC m=+7.896885538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config") pod "machine-config-controller-b4f87c5b9-prrnd" (UID: "d7205eeb-912b-4c31-b08f-ed0b2a1319aa") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903581 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images podName:94e2a8f0-2c2e-43da-9fa9-69edfcd77830 nodeName:}" failed. 
No retries permitted until 2026-03-18 09:03:40.403553178 +0000 UTC m=+7.896904579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images") pod "cluster-cloud-controller-manager-operator-7dff898856-vwqc4" (UID: "94e2a8f0-2c2e-43da-9fa9-69edfcd77830") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903559 master-0 kubenswrapper[26053]: E0318 09:03:39.903600 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config podName:e88b021c-c810-4a68-aa48-d8666b52330e nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.403592539 +0000 UTC m=+7.896943940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-tx2pv" (UID: "e88b021c-c810-4a68-aa48-d8666b52330e") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903843 master-0 kubenswrapper[26053]: E0318 09:03:39.903630 26053 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.903843 master-0 kubenswrapper[26053]: E0318 09:03:39.903662 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca podName:7b7ac7ef-060f-45d2-8988-006d45402e00 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.40365364 +0000 UTC m=+7.897005041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca") pod "route-controller-manager-7dbcb47f86-ptccg" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.905911 master-0 kubenswrapper[26053]: E0318 09:03:39.905885 26053 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.905985 master-0 kubenswrapper[26053]: E0318 09:03:39.905918 26053 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.905985 master-0 kubenswrapper[26053]: E0318 09:03:39.905948 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls podName:15798f4d-8bcc-4e24-bb18-8dff1f4edf59 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.405932479 +0000 UTC m=+7.899283910 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-nbkgf" (UID: "15798f4d-8bcc-4e24-bb18-8dff1f4edf59") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:39.905985 master-0 kubenswrapper[26053]: E0318 09:03:39.905894 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.905985 master-0 kubenswrapper[26053]: E0318 09:03:39.905972 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config podName:cdcd27a4-6d46-47af-a14a-65f6501c10f0 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.40596375 +0000 UTC m=+7.899315241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config") pod "machine-approver-5c6485487f-r4mv6" (UID: "cdcd27a4-6d46-47af-a14a-65f6501c10f0") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.906153 master-0 kubenswrapper[26053]: E0318 09:03:39.905993 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca podName:15798f4d-8bcc-4e24-bb18-8dff1f4edf59 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.405981 +0000 UTC m=+7.899332381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca") pod "kube-state-metrics-7bbc969446-nbkgf" (UID: "15798f4d-8bcc-4e24-bb18-8dff1f4edf59") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.906153 master-0 kubenswrapper[26053]: E0318 09:03:39.906125 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.906233 master-0 kubenswrapper[26053]: E0318 09:03:39.906156 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:40.406147865 +0000 UTC m=+7.899499246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:39.907254 master-0 kubenswrapper[26053]: I0318 09:03:39.907228 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 18 09:03:39.927750 master-0 kubenswrapper[26053]: I0318 09:03:39.927682 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 18 09:03:39.949450 master-0 kubenswrapper[26053]: I0318 09:03:39.949339 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 09:03:39.968275 master-0 kubenswrapper[26053]: I0318 09:03:39.968224 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-insights"/"operator-dockercfg-kvnts" Mar 18 09:03:39.991064 master-0 kubenswrapper[26053]: I0318 09:03:39.991023 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 09:03:40.009836 master-0 kubenswrapper[26053]: I0318 09:03:40.009795 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 18 09:03:40.027269 master-0 kubenswrapper[26053]: I0318 09:03:40.027221 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 09:03:40.047683 master-0 kubenswrapper[26053]: I0318 09:03:40.047184 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 09:03:40.068460 master-0 kubenswrapper[26053]: I0318 09:03:40.068180 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-fhncm" Mar 18 09:03:40.088279 master-0 kubenswrapper[26053]: I0318 09:03:40.088223 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 09:03:40.112734 master-0 kubenswrapper[26053]: I0318 09:03:40.112634 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kldf7" Mar 18 09:03:40.131966 master-0 kubenswrapper[26053]: I0318 09:03:40.131922 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 09:03:40.155375 master-0 kubenswrapper[26053]: I0318 09:03:40.150837 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-pws99" Mar 18 09:03:40.167037 master-0 kubenswrapper[26053]: I0318 09:03:40.166993 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 09:03:40.180175 master-0 kubenswrapper[26053]: I0318 09:03:40.180128 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:40.187329 master-0 kubenswrapper[26053]: I0318 09:03:40.187121 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 18 09:03:40.207430 master-0 kubenswrapper[26053]: I0318 09:03:40.207373 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 09:03:40.226665 master-0 kubenswrapper[26053]: I0318 09:03:40.226619 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 09:03:40.256880 master-0 kubenswrapper[26053]: I0318 09:03:40.256822 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 18 09:03:40.270366 master-0 kubenswrapper[26053]: I0318 09:03:40.270312 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-jqmlx" Mar 18 09:03:40.302607 master-0 kubenswrapper[26053]: I0318 09:03:40.302475 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 09:03:40.312589 master-0 kubenswrapper[26053]: I0318 09:03:40.308432 26053 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 18 09:03:40.329351 master-0 kubenswrapper[26053]: I0318 09:03:40.327852 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 18 09:03:40.346806 master-0 kubenswrapper[26053]: I0318 09:03:40.346770 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-khzbd" Mar 18 09:03:40.367986 master-0 kubenswrapper[26053]: I0318 09:03:40.367949 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 18 09:03:40.370726 master-0 kubenswrapper[26053]: I0318 09:03:40.370693 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:40.387608 master-0 kubenswrapper[26053]: I0318 09:03:40.387575 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 09:03:40.406730 master-0 kubenswrapper[26053]: I0318 09:03:40.406675 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-nxx2s" Mar 18 09:03:40.427626 master-0 kubenswrapper[26053]: I0318 09:03:40.427557 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 09:03:40.441407 master-0 kubenswrapper[26053]: I0318 09:03:40.441370 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: 
\"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:40.441533 master-0 kubenswrapper[26053]: I0318 09:03:40.441425 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:40.441533 master-0 kubenswrapper[26053]: I0318 09:03:40.441447 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:40.441533 master-0 kubenswrapper[26053]: I0318 09:03:40.441466 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:40.441533 master-0 kubenswrapper[26053]: I0318 09:03:40.441501 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:40.441533 master-0 kubenswrapper[26053]: I0318 09:03:40.441519 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441537 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441599 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441685 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441712 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441728 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441746 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441764 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:40.441787 master-0 kubenswrapper[26053]: I0318 09:03:40.441780 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 
09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441817 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441833 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441889 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441906 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441923 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod 
\"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441943 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441962 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.441984 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.442001 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:40.442063 
master-0 kubenswrapper[26053]: I0318 09:03:40.442037 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.442063 master-0 kubenswrapper[26053]: I0318 09:03:40.442061 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442077 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442094 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442133 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442151 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442169 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442191 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442236 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442253 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442269 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442285 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442304 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442327 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442357 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442375 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 09:03:40.442461 master-0 kubenswrapper[26053]: I0318 09:03:40.442418 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442486 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442512 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442556 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442623 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442672 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442700 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442759 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442794 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442820 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442895 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442918 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442947 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442970 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"
Mar 18 09:03:40.442993 master-0 kubenswrapper[26053]: I0318 09:03:40.442988 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"
Mar 18 09:03:40.443381 master-0 kubenswrapper[26053]: I0318 09:03:40.443013 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"
Mar 18 09:03:40.443381 master-0 kubenswrapper[26053]: I0318 09:03:40.443029 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 09:03:40.443381 master-0 kubenswrapper[26053]: I0318 09:03:40.443047 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv"
Mar 18 09:03:40.443381 master-0 kubenswrapper[26053]: I0318 09:03:40.443351 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-webhook-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 09:03:40.443723 master-0 kubenswrapper[26053]: I0318 09:03:40.443551 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-mcd-auth-proxy-config\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f"
Mar 18 09:03:40.444264 master-0 kubenswrapper[26053]: I0318 09:03:40.444236 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-images\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 09:03:40.444320 master-0 kubenswrapper[26053]: I0318 09:03:40.444245 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3898c28b-69b0-46af-b085-37e12d7d80ba-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5"
Mar 18 09:03:40.444422 master-0 kubenswrapper[26053]: I0318 09:03:40.444383 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/bef948b9-eef4-404b-9b49-6e4a2ceea73b-proxy-tls\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 09:03:40.444524 master-0 kubenswrapper[26053]: I0318 09:03:40.444493 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e88b021c-c810-4a68-aa48-d8666b52330e-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 09:03:40.444726 master-0 kubenswrapper[26053]: I0318 09:03:40.444693 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/25781967-12ce-490e-94aa-9b9722f495da-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp"
Mar 18 09:03:40.444910 master-0 kubenswrapper[26053]: I0318 09:03:40.444880 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 09:03:40.445174 master-0 kubenswrapper[26053]: I0318 09:03:40.445141 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-images\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 09:03:40.445284 master-0 kubenswrapper[26053]: I0318 09:03:40.445264 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8f3615-9e89-4b51-87a2-7d168c81adf3-config\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d"
Mar 18 09:03:40.445500 master-0 kubenswrapper[26053]: I0318 09:03:40.445479 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-apiservice-cert\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r"
Mar 18 09:03:40.445621 master-0 kubenswrapper[26053]: I0318 09:03:40.445599 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f918d08d-df7c-4e8d-85ba-1c92d766db16-serving-cert\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 09:03:40.445676 master-0 kubenswrapper[26053]: I0318 09:03:40.445630 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f918d08d-df7c-4e8d-85ba-1c92d766db16-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc"
Mar 18 09:03:40.445818 master-0 kubenswrapper[26053]: I0318 09:03:40.445798 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e88b021c-c810-4a68-aa48-d8666b52330e-cert\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv"
Mar 18 09:03:40.445876 master-0 kubenswrapper[26053]: I0318 09:03:40.445828 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/bef948b9-eef4-404b-9b49-6e4a2ceea73b-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b"
Mar 18 09:03:40.447104 master-0 kubenswrapper[26053]: I0318 09:03:40.447073 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 09:03:40.455488 master-0 kubenswrapper[26053]: I0318 09:03:40.455441 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cd1cf7-be6f-4baf-8761-69c693476de9-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 09:03:40.466789 master-0 kubenswrapper[26053]: I0318 09:03:40.466743 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m9g5m"
Mar 18 09:03:40.493835 master-0 kubenswrapper[26053]: I0318 09:03:40.493791 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 09:03:40.504859 master-0 kubenswrapper[26053]: I0318 09:03:40.504812 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a0cd1cf7-be6f-4baf-8761-69c693476de9-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw"
Mar 18 09:03:40.507310 master-0 kubenswrapper[26053]: I0318 09:03:40.507276 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 09:03:40.527475 master-0 kubenswrapper[26053]: I0318 09:03:40.527421 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 09:03:40.547963 master-0 kubenswrapper[26053]: I0318 09:03:40.547905 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 09:03:40.554737 master-0 kubenswrapper[26053]: I0318 09:03:40.554629 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/fdb52116-9c55-4464-99c8-fc2e4559996b-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 09:03:40.567277 master-0 kubenswrapper[26053]: I0318 09:03:40.567235 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-d6jf5"
Mar 18 09:03:40.586926 master-0 kubenswrapper[26053]: I0318 09:03:40.586869 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 18 09:03:40.594226 master-0 kubenswrapper[26053]: I0318 09:03:40.594187 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-images\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 09:03:40.607476 master-0 kubenswrapper[26053]: I0318 09:03:40.607406 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 09:03:40.616435 master-0 kubenswrapper[26053]: I0318 09:03:40.616361 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdb52116-9c55-4464-99c8-fc2e4559996b-config\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h"
Mar 18 09:03:40.626977 master-0 kubenswrapper[26053]: I0318 09:03:40.626933 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xhpr4"
Mar 18 09:03:40.647393 master-0 kubenswrapper[26053]: I0318 09:03:40.647341 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 18 09:03:40.656456 master-0 kubenswrapper[26053]: I0318 09:03:40.656416 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-proxy-tls\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f"
Mar 18 09:03:40.666658 master-0 kubenswrapper[26053]: I0318 09:03:40.666596 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-s4fhp"
Mar 18 09:03:40.686375 master-0 kubenswrapper[26053]: I0318 09:03:40.686336 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 09:03:40.696098 master-0 kubenswrapper[26053]: I0318 09:03:40.696052 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cdf1c657-a9dc-455a-b2fd-27a518bc5199-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-4jrzp\" (UID: \"cdf1c657-a9dc-455a-b2fd-27a518bc5199\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp"
Mar 18 09:03:40.707774 master-0 kubenswrapper[26053]: I0318 09:03:40.707713 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vvwvf"
Mar 18 09:03:40.727154 master-0 kubenswrapper[26053]: I0318 09:03:40.727101 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 09:03:40.736029 master-0 kubenswrapper[26053]: I0318 09:03:40.735986 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/cdcd27a4-6d46-47af-a14a-65f6501c10f0-machine-approver-tls\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.748249 master-0 kubenswrapper[26053]: I0318 09:03:40.748184 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 18 09:03:40.754974 master-0 kubenswrapper[26053]: I0318 09:03:40.754933 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-auth-proxy-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.765556 master-0 kubenswrapper[26053]: I0318 09:03:40.765504 26053 request.go:700] Waited for 2.004273186s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Mar 18 09:03:40.766802 master-0 kubenswrapper[26053]: I0318 09:03:40.766760 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 09:03:40.787141 master-0 kubenswrapper[26053]: I0318 09:03:40.787061 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 09:03:40.807891 master-0 kubenswrapper[26053]: I0318 09:03:40.807701 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 09:03:40.814885 master-0 kubenswrapper[26053]: I0318 09:03:40.814842 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdcd27a4-6d46-47af-a14a-65f6501c10f0-config\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6"
Mar 18 09:03:40.827584 master-0 kubenswrapper[26053]: I0318 09:03:40.827511 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-l4xp6"
Mar 18 09:03:40.847421 master-0 kubenswrapper[26053]: I0318 09:03:40.847361 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 09:03:40.856513 master-0 kubenswrapper[26053]: I0318 09:03:40.856469 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-certs\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 09:03:40.868872 master-0 kubenswrapper[26053]: I0318 09:03:40.868826 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hbb9q"
Mar 18 09:03:40.887975 master-0 kubenswrapper[26053]: I0318 09:03:40.887931 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 09:03:40.894111 master-0 kubenswrapper[26053]: I0318 09:03:40.894077 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"
Mar 18 09:03:40.906988 master-0 kubenswrapper[26053]: I0318 09:03:40.906955 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 18 09:03:40.916346 master-0 kubenswrapper[26053]: I0318 09:03:40.916317 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"
Mar 18 09:03:40.926702 master-0 kubenswrapper[26053]: I0318 09:03:40.926672 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 18 09:03:40.935272 master-0 kubenswrapper[26053]: I0318 09:03:40.935198 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4"
Mar 18 09:03:40.946830 master-0 kubenswrapper[26053]: I0318 09:03:40.946774 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:03:40.967916 master-0 kubenswrapper[26053]: I0318 09:03:40.967867 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:03:40.988144 master-0 kubenswrapper[26053]: I0318 09:03:40.988079 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nvh22"
Mar 18 09:03:41.007432 master-0 kubenswrapper[26053]: I0318 09:03:41.007364 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 09:03:41.017126 master-0 kubenswrapper[26053]: I0318 09:03:41.017071 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/14489ef7-8df3-4a3b-a137-3a78e89d425b-node-bootstrap-token\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw"
Mar 18 09:03:41.027922 master-0 kubenswrapper[26053]: I0318 09:03:41.027849 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 09:03:41.034101 master-0 kubenswrapper[26053]: I0318 09:03:41.034050 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 09:03:41.047291 master-0 kubenswrapper[26053]: I0318 09:03:41.047246 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 18 09:03:41.055172 master-0 kubenswrapper[26053]: I0318 09:03:41.055126 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/8683c8c6-3a77-4b46-8898-142f9781b49c-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 09:03:41.067274 master-0 kubenswrapper[26053]: I0318 09:03:41.067192 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mbtdj"
Mar 18 09:03:41.087280 master-0 kubenswrapper[26053]: I0318 09:03:41.087201 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 09:03:41.095453 master-0 kubenswrapper[26053]: I0318 09:03:41.095388 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:41.095453 master-0 kubenswrapper[26053]: I0318 09:03:41.095415 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8683c8c6-3a77-4b46-8898-142f9781b49c-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5"
Mar 18 09:03:41.095453 master-0 kubenswrapper[26053]: I0318 09:03:41.095400 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/599418d3-6afa-46ab-9afa-659134f7ac94-metrics-client-ca\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 09:03:41.095826 master-0 kubenswrapper[26053]: I0318 09:03:41.095504 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2b59dbf5-0a61-4981-aed3-e73550615c4a-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n"
Mar 18 09:03:41.108482 master-0 kubenswrapper[26053]: I0318 09:03:41.108415 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 09:03:41.117298 master-0 kubenswrapper[26053]: I0318 09:03:41.117044 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd"
Mar 18 09:03:41.127242 master-0 kubenswrapper[26053]: I0318 09:03:41.127183 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 09:03:41.134011 master-0 kubenswrapper[26053]: I0318 09:03:41.133956 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:41.148376 master-0 kubenswrapper[26053]: I0318 09:03:41.148318 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 09:03:41.156463 master-0 kubenswrapper[26053]: I0318 09:03:41.156418 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:41.166871 master-0 kubenswrapper[26053]: I0318 09:03:41.166823 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m2754"
Mar 18 09:03:41.187985 master-0 kubenswrapper[26053]: I0318 09:03:41.187912 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 09:03:41.195747 master-0 kubenswrapper[26053]: I0318 09:03:41.195705 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf"
Mar 18 09:03:41.207854 master-0 kubenswrapper[26053]: I0318 09:03:41.207795 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 09:03:41.215846 master-0 kubenswrapper[26053]: I0318 09:03:41.215790 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-tls\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg"
Mar 18 09:03:41.228021 master-0 kubenswrapper[26053]: I0318 09:03:41.227957 26053 reflector.go:368] Caches populated for
*v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 18 09:03:41.236381 master-0 kubenswrapper[26053]: I0318 09:03:41.236327 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/599418d3-6afa-46ab-9afa-659134f7ac94-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:41.247381 master-0 kubenswrapper[26053]: I0318 09:03:41.247326 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4" Mar 18 09:03:41.267794 master-0 kubenswrapper[26053]: I0318 09:03:41.267703 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:03:41.286946 master-0 kubenswrapper[26053]: I0318 09:03:41.286875 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:03:41.295681 master-0 kubenswrapper[26053]: I0318 09:03:41.295619 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:41.307486 master-0 kubenswrapper[26053]: I0318 09:03:41.307440 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:03:41.327260 master-0 kubenswrapper[26053]: I0318 09:03:41.327137 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 
09:03:41.336535 master-0 kubenswrapper[26053]: I0318 09:03:41.336464 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:41.347471 master-0 kubenswrapper[26053]: I0318 09:03:41.347409 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:03:41.354312 master-0 kubenswrapper[26053]: I0318 09:03:41.354253 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:41.367338 master-0 kubenswrapper[26053]: I0318 09:03:41.367295 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 09:03:41.376206 master-0 kubenswrapper[26053]: I0318 09:03:41.376088 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:41.388746 master-0 kubenswrapper[26053]: I0318 09:03:41.388697 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-29bbg" Mar 18 09:03:41.407139 master-0 
kubenswrapper[26053]: I0318 09:03:41.407091 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 09:03:41.415164 master-0 kubenswrapper[26053]: I0318 09:03:41.415104 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:41.427207 master-0 kubenswrapper[26053]: I0318 09:03:41.427166 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:03:41.443555 master-0 kubenswrapper[26053]: E0318 09:03:41.443495 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.443772 master-0 kubenswrapper[26053]: E0318 09:03:41.443614 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.443594021 +0000 UTC m=+9.936945412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.443772 master-0 kubenswrapper[26053]: E0318 09:03:41.443674 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.443772 master-0 kubenswrapper[26053]: E0318 09:03:41.443764 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.443739914 +0000 UTC m=+9.937091305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444688 master-0 kubenswrapper[26053]: E0318 09:03:41.444650 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.444778 master-0 kubenswrapper[26053]: E0318 09:03:41.444697 26053 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444778 master-0 kubenswrapper[26053]: E0318 09:03:41.444715 26053 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444722 
26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.444701689 +0000 UTC m=+9.938053150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444803 26053 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444823 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert podName:9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.444797872 +0000 UTC m=+9.938149273 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert") pod "ingress-canary-226gc" (UID: "9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444843 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.444834513 +0000 UTC m=+9.938185904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444859 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls podName:2b59dbf5-0a61-4981-aed3-e73550615c4a nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.444851583 +0000 UTC m=+9.938202974 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-rm78n" (UID: "2b59dbf5-0a61-4981-aed3-e73550615c4a") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.444904 master-0 kubenswrapper[26053]: E0318 09:03:41.444873 26053 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.445195 master-0 kubenswrapper[26053]: E0318 09:03:41.444933 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles podName:6e869b45-8ca6-485f-8b6f-b2fad3b02efe nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.444914985 +0000 UTC m=+9.938266466 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles") pod "controller-manager-7d954fcfb-gpddv" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.446027 master-0 kubenswrapper[26053]: E0318 09:03:41.445988 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-2gj3dpncb7vk4: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.446134 master-0 kubenswrapper[26053]: E0318 09:03:41.446032 26053 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.446134 master-0 kubenswrapper[26053]: E0318 09:03:41.446044 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.446030343 +0000 UTC m=+9.939381734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.446134 master-0 kubenswrapper[26053]: E0318 09:03:41.446063 26053 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.446134 master-0 kubenswrapper[26053]: E0318 09:03:41.446081 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.446067534 +0000 UTC m=+9.939418925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:41.446134 master-0 kubenswrapper[26053]: E0318 09:03:41.446119 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs podName:87381a51-96e6-4e86-bdae-c8ac3fc7a039 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:42.446102705 +0000 UTC m=+9.939454176 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs") pod "metrics-server-7875f64c8-kmr8t" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039") : failed to sync secret cache: timed out waiting for the condition Mar 18 09:03:41.447341 master-0 kubenswrapper[26053]: I0318 09:03:41.447282 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6" Mar 18 09:03:41.467775 master-0 kubenswrapper[26053]: I0318 09:03:41.467729 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:03:41.487805 master-0 kubenswrapper[26053]: I0318 09:03:41.487759 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:03:41.507292 master-0 kubenswrapper[26053]: I0318 09:03:41.507225 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:03:41.528864 master-0 kubenswrapper[26053]: I0318 09:03:41.528788 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 18 09:03:41.548058 master-0 kubenswrapper[26053]: I0318 09:03:41.547996 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 09:03:41.567802 master-0 kubenswrapper[26053]: I0318 09:03:41.567747 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2gj3dpncb7vk4" Mar 18 09:03:41.588500 master-0 kubenswrapper[26053]: I0318 09:03:41.588340 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:03:41.620683 master-0 kubenswrapper[26053]: I0318 09:03:41.620554 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:03:41.628484 master-0 kubenswrapper[26053]: I0318 09:03:41.628431 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9xv2f" Mar 18 09:03:41.647461 master-0 kubenswrapper[26053]: I0318 09:03:41.647402 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s9qtf" Mar 18 09:03:41.668341 master-0 kubenswrapper[26053]: I0318 09:03:41.668241 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 09:03:41.687680 master-0 kubenswrapper[26053]: I0318 09:03:41.687629 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 09:03:41.708196 master-0 kubenswrapper[26053]: I0318 09:03:41.708105 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 09:03:41.727276 master-0 kubenswrapper[26053]: I0318 09:03:41.727189 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 09:03:41.747318 master-0 kubenswrapper[26053]: I0318 09:03:41.747254 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-pfhv7" Mar 18 09:03:41.766619 master-0 kubenswrapper[26053]: I0318 09:03:41.766507 26053 request.go:700] Waited for 2.984098725s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0 Mar 18 09:03:41.768478 master-0 kubenswrapper[26053]: I0318 09:03:41.768217 26053 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Mar 18 09:03:41.787971 master-0 kubenswrapper[26053]: I0318 09:03:41.787890 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 18 09:03:41.807475 master-0 kubenswrapper[26053]: I0318 09:03:41.807372 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-jw7t8" Mar 18 09:03:41.845056 master-0 kubenswrapper[26053]: E0318 09:03:41.844907 26053 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.068s" Mar 18 09:03:41.845296 master-0 kubenswrapper[26053]: I0318 09:03:41.845105 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qkhnq"] Mar 18 09:03:41.846140 master-0 kubenswrapper[26053]: E0318 09:03:41.846080 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 09:03:41.846140 master-0 kubenswrapper[26053]: I0318 09:03:41.846130 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846173 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846189 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846204 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846217 26053 
state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846238 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846253 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846270 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846283 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846298 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846311 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: E0318 09:03:41.846325 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 09:03:41.846348 master-0 kubenswrapper[26053]: I0318 09:03:41.846337 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: E0318 09:03:41.846368 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 
09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846382 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: E0318 09:03:41.846411 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846426 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: E0318 09:03:41.846457 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846471 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: E0318 09:03:41.846490 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846503 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: E0318 09:03:41.846525 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846537 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846856 26053 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0c9de07b-1ef1-4228-b310-1007d999dc7b" containerName="assisted-installer-controller" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846892 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46fcf39-9167-4ec2-9d2c-0a622bc69d13" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846921 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846940 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e4bcfe-d6ca-4799-9431-682673fe7380" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846961 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.846984 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="3253d87f-ae48-42cf-950f-f508a9b82d0d" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.847002 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c393a935-1821-4742-b1bb-0ee52ada5434" containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.847026 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.847047 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.847064 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="93298cb2-d669-49ea-92be-8891f07ab1c5" 
containerName="installer" Mar 18 09:03:41.847067 master-0 kubenswrapper[26053]: I0318 09:03:41.847086 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b830ff-8938-4f21-8977-c29a19c85afb" containerName="installer" Mar 18 09:03:41.848209 master-0 kubenswrapper[26053]: I0318 09:03:41.847111 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ca7b84e-0aff-4526-948a-03492712ff8f" containerName="installer" Mar 18 09:03:41.851028 master-0 kubenswrapper[26053]: I0318 09:03:41.850946 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:41.867440 master-0 kubenswrapper[26053]: I0318 09:03:41.867338 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 18 09:03:41.870663 master-0 kubenswrapper[26053]: I0318 09:03:41.870607 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22wv\" (UniqueName: \"kubernetes.io/projected/e88b021c-c810-4a68-aa48-d8666b52330e-kube-api-access-k22wv\") pod \"cluster-autoscaler-operator-866dc4744-tx2pv\" (UID: \"e88b021c-c810-4a68-aa48-d8666b52330e\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-tx2pv" Mar 18 09:03:41.889667 master-0 kubenswrapper[26053]: I0318 09:03:41.889617 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfwv\" (UniqueName: \"kubernetes.io/projected/0f6a7f55-84bd-4ea5-8248-4cb565904c3b-kube-api-access-lnfwv\") pod \"openshift-controller-manager-operator-8c94f4649-2g6x9\" (UID: \"0f6a7f55-84bd-4ea5-8248-4cb565904c3b\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-2g6x9" Mar 18 09:03:41.902676 master-0 kubenswrapper[26053]: I0318 09:03:41.902627 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zx99\" (UniqueName: 
\"kubernetes.io/projected/c6176328-5931-405b-8519-8e4bc83bedfb-kube-api-access-5zx99\") pod \"migrator-8487694857-sbsqg\" (UID: \"c6176328-5931-405b-8519-8e4bc83bedfb\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-sbsqg" Mar 18 09:03:41.918491 master-0 kubenswrapper[26053]: I0318 09:03:41.918425 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnn98\" (UniqueName: \"kubernetes.io/projected/bef948b9-eef4-404b-9b49-6e4a2ceea73b-kube-api-access-mnn98\") pod \"machine-config-operator-84d549f6d5-vj84b\" (UID: \"bef948b9-eef4-404b-9b49-6e4a2ceea73b\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-vj84b" Mar 18 09:03:41.941822 master-0 kubenswrapper[26053]: I0318 09:03:41.941768 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgbr\" (UniqueName: \"kubernetes.io/projected/2b59dbf5-0a61-4981-aed3-e73550615c4a-kube-api-access-nqgbr\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:41.961069 master-0 kubenswrapper[26053]: I0318 09:03:41.961018 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5svd\" (UniqueName: \"kubernetes.io/projected/af1fbcf2-d4de-4015-89fc-2565e855a04d-kube-api-access-r5svd\") pod \"multus-h7vq8\" (UID: \"af1fbcf2-d4de-4015-89fc-2565e855a04d\") " pod="openshift-multus/multus-h7vq8" Mar 18 09:03:41.980160 master-0 kubenswrapper[26053]: I0318 09:03:41.980079 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:41.981990 master-0 kubenswrapper[26053]: 
I0318 09:03:41.981917 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:41.982408 master-0 kubenswrapper[26053]: I0318 09:03:41.982351 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:41.982756 master-0 kubenswrapper[26053]: I0318 09:03:41.982718 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wvk\" (UniqueName: \"kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:41.983832 master-0 kubenswrapper[26053]: I0318 09:03:41.983783 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqjsq\" (UniqueName: \"kubernetes.io/projected/c5e43736-33c3-4949-98ca-971332541d64-kube-api-access-sqjsq\") pod \"node-resolver-thqlt\" (UID: \"c5e43736-33c3-4949-98ca-971332541d64\") " pod="openshift-dns/node-resolver-thqlt" Mar 18 09:03:42.001345 master-0 kubenswrapper[26053]: I0318 09:03:42.001297 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"multus-admission-controller-5dbbb8b86f-25rbq\" (UID: 
\"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:03:42.024546 master-0 kubenswrapper[26053]: I0318 09:03:42.024484 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5nwv\" (UniqueName: \"kubernetes.io/projected/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-kube-api-access-j5nwv\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc" Mar 18 09:03:42.041863 master-0 kubenswrapper[26053]: I0318 09:03:42.041812 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csfl2\" (UniqueName: \"kubernetes.io/projected/2a864188-ada6-4ec2-bf9f-72dab210f0ce-kube-api-access-csfl2\") pod \"cluster-storage-operator-7d87854d6-9f7lz\" (UID: \"2a864188-ada6-4ec2-bf9f-72dab210f0ce\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-9f7lz" Mar 18 09:03:42.062484 master-0 kubenswrapper[26053]: I0318 09:03:42.062429 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod \"route-controller-manager-7dbcb47f86-ptccg\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:42.082884 master-0 kubenswrapper[26053]: I0318 09:03:42.082815 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2rc\" (UniqueName: \"kubernetes.io/projected/c5c995cf-40a0-4cd6-87fa-96a522f7bc57-kube-api-access-rm2rc\") pod \"csi-snapshot-controller-operator-5f5d689c6b-lhcpp\" (UID: \"c5c995cf-40a0-4cd6-87fa-96a522f7bc57\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-lhcpp" Mar 18 09:03:42.084734 master-0 kubenswrapper[26053]: I0318 09:03:42.084692 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:42.084922 master-0 kubenswrapper[26053]: I0318 09:03:42.084900 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:42.085170 master-0 kubenswrapper[26053]: I0318 09:03:42.085128 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:42.085273 master-0 kubenswrapper[26053]: I0318 09:03:42.085249 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:42.085461 master-0 kubenswrapper[26053]: I0318 09:03:42.085429 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wvk\" (UniqueName: \"kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 
09:03:42.085551 master-0 kubenswrapper[26053]: I0318 09:03:42.085477 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:42.101599 master-0 kubenswrapper[26053]: I0318 09:03:42.101471 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrxv\" (UniqueName: \"kubernetes.io/projected/fdb52116-9c55-4464-99c8-fc2e4559996b-kube-api-access-xzrxv\") pod \"machine-api-operator-6fbb6cf6f9-n4t2h\" (UID: \"fdb52116-9c55-4464-99c8-fc2e4559996b\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-n4t2h" Mar 18 09:03:42.122281 master-0 kubenswrapper[26053]: I0318 09:03:42.122215 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkw45\" (UniqueName: \"kubernetes.io/projected/2d0da6e3-3887-4361-8eae-e7447f9ff72c-kube-api-access-xkw45\") pod \"package-server-manager-7b95f86987-k6xp5\" (UID: \"2d0da6e3-3887-4361-8eae-e7447f9ff72c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:42.142505 master-0 kubenswrapper[26053]: I0318 09:03:42.142454 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z98qs\" (UniqueName: \"kubernetes.io/projected/3898c28b-69b0-46af-b085-37e12d7d80ba-kube-api-access-z98qs\") pod \"cluster-samples-operator-85f7577d78-xjbb5\" (UID: \"3898c28b-69b0-46af-b085-37e12d7d80ba\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-xjbb5" Mar 18 09:03:42.160644 master-0 kubenswrapper[26053]: I0318 09:03:42.160554 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/65cff83a-8d8f-4e4f-96ef-99941c29ba53-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-pp4r9\" (UID: \"65cff83a-8d8f-4e4f-96ef-99941c29ba53\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-pp4r9" Mar 18 09:03:42.183721 master-0 kubenswrapper[26053]: I0318 09:03:42.183615 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhdc2\" (UniqueName: \"kubernetes.io/projected/a7cf2cff-ca67-4cc6-99e7-99478ab89af4-kube-api-access-vhdc2\") pod \"machine-config-daemon-rhm2f\" (UID: \"a7cf2cff-ca67-4cc6-99e7-99478ab89af4\") " pod="openshift-machine-config-operator/machine-config-daemon-rhm2f" Mar 18 09:03:42.203135 master-0 kubenswrapper[26053]: I0318 09:03:42.203087 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdsp\" (UniqueName: \"kubernetes.io/projected/e86268c9-7a83-4ccb-979a-feff00cb4b3e-kube-api-access-ptdsp\") pod \"authentication-operator-5885bfd7f4-j75sc\" (UID: \"e86268c9-7a83-4ccb-979a-feff00cb4b3e\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-j75sc" Mar 18 09:03:42.222666 master-0 kubenswrapper[26053]: I0318 09:03:42.222602 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddsnb\" (UniqueName: \"kubernetes.io/projected/d7205eeb-912b-4c31-b08f-ed0b2a1319aa-kube-api-access-ddsnb\") pod \"machine-config-controller-b4f87c5b9-prrnd\" (UID: \"d7205eeb-912b-4c31-b08f-ed0b2a1319aa\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-prrnd" Mar 18 09:03:42.244397 master-0 kubenswrapper[26053]: I0318 09:03:42.244339 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwp9m\" (UniqueName: \"kubernetes.io/projected/4e919445-81d0-4663-8941-f596d8121305-kube-api-access-kwp9m\") pod \"csi-snapshot-controller-64854d9cff-qnc62\" (UID: \"4e919445-81d0-4663-8941-f596d8121305\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" Mar 18 09:03:42.263326 master-0 kubenswrapper[26053]: I0318 09:03:42.263286 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw5zj\" (UniqueName: \"kubernetes.io/projected/800297fe-77fd-4f58-ade2-32a147cd7d5c-kube-api-access-tw5zj\") pod \"operator-controller-controller-manager-57777556ff-xfqsm\" (UID: \"800297fe-77fd-4f58-ade2-32a147cd7d5c\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:42.285351 master-0 kubenswrapper[26053]: I0318 09:03:42.285288 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczj8\" (UniqueName: \"kubernetes.io/projected/a1f2b373-0c85-4028-9089-9e9dff5d37b5-kube-api-access-lczj8\") pod \"apiserver-77f845f574-2wpgz\" (UID: \"a1f2b373-0c85-4028-9089-9e9dff5d37b5\") " pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:42.309616 master-0 kubenswrapper[26053]: I0318 09:03:42.309402 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-774fx\" (UniqueName: \"kubernetes.io/projected/599418d3-6afa-46ab-9afa-659134f7ac94-kube-api-access-774fx\") pod \"node-exporter-kp8pg\" (UID: \"599418d3-6afa-46ab-9afa-659134f7ac94\") " pod="openshift-monitoring/node-exporter-kp8pg" Mar 18 09:03:42.334327 master-0 kubenswrapper[26053]: I0318 09:03:42.334244 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk4w7\" (UniqueName: \"kubernetes.io/projected/f198f770-5483-4499-abb6-06026f2c6b37-kube-api-access-sk4w7\") pod \"network-check-target-7r2q2\" (UID: \"f198f770-5483-4499-abb6-06026f2c6b37\") " pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 09:03:42.337796 master-0 kubenswrapper[26053]: I0318 09:03:42.337700 26053 scope.go:117] "RemoveContainer" containerID="97b6b0922d17ce30a0b9e74a3e377338947d2ced4f3ea98ad7676d4078ee6fa4" 
Mar 18 09:03:42.345544 master-0 kubenswrapper[26053]: I0318 09:03:42.345474 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkmb4\" (UniqueName: \"kubernetes.io/projected/1deb139f-1903-417e-835c-28abdd156cdb-kube-api-access-dkmb4\") pod \"cluster-node-tuning-operator-598fbc5f8f-9s8lp\" (UID: \"1deb139f-1903-417e-835c-28abdd156cdb\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-9s8lp" Mar 18 09:03:42.371744 master-0 kubenswrapper[26053]: I0318 09:03:42.368735 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4w9\" (UniqueName: \"kubernetes.io/projected/c00ee838-424f-482b-942f-08f0952a5ccd-kube-api-access-9w4w9\") pod \"olm-operator-5c9796789-twp27\" (UID: \"c00ee838-424f-482b-942f-08f0952a5ccd\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 09:03:42.400955 master-0 kubenswrapper[26053]: I0318 09:03:42.400008 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4jq4\" (UniqueName: \"kubernetes.io/projected/bf5fd4cc-959e-4878-82e9-b0f90dba6553-kube-api-access-r4jq4\") pod \"redhat-marketplace-2gpbt\" (UID: \"bf5fd4cc-959e-4878-82e9-b0f90dba6553\") " pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:42.416017 master-0 kubenswrapper[26053]: I0318 09:03:42.415942 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj95l\" (UniqueName: \"kubernetes.io/projected/eb8f3615-9e89-4b51-87a2-7d168c81adf3-kube-api-access-mj95l\") pod \"cluster-baremetal-operator-6f69995874-mcd6d\" (UID: \"eb8f3615-9e89-4b51-87a2-7d168c81adf3\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-mcd6d" Mar 18 09:03:42.425513 master-0 kubenswrapper[26053]: I0318 09:03:42.425456 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2mwd\" (UniqueName: 
\"kubernetes.io/projected/15798f4d-8bcc-4e24-bb18-8dff1f4edf59-kube-api-access-m2mwd\") pod \"kube-state-metrics-7bbc969446-nbkgf\" (UID: \"15798f4d-8bcc-4e24-bb18-8dff1f4edf59\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-nbkgf" Mar 18 09:03:42.445320 master-0 kubenswrapper[26053]: I0318 09:03:42.445162 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257nx\" (UniqueName: \"kubernetes.io/projected/f6833a48-fccb-42bd-ac90-29f08d5bf7e8-kube-api-access-257nx\") pod \"catalog-operator-68f85b4d6c-hhn7l\" (UID: \"f6833a48-fccb-42bd-ac90-29f08d5bf7e8\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 09:03:42.463071 master-0 kubenswrapper[26053]: I0318 09:03:42.463000 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9mh7\" (UniqueName: \"kubernetes.io/projected/600c92a1-56c5-497b-a8f0-746830f4180e-kube-api-access-m9mh7\") pod \"iptables-alerter-vr4gq\" (UID: \"600c92a1-56c5-497b-a8f0-746830f4180e\") " pod="openshift-network-operator/iptables-alerter-vr4gq" Mar 18 09:03:42.482024 master-0 kubenswrapper[26053]: I0318 09:03:42.481927 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n76wp\" (UniqueName: \"kubernetes.io/projected/14489ef7-8df3-4a3b-a137-3a78e89d425b-kube-api-access-n76wp\") pod \"machine-config-server-rw7hw\" (UID: \"14489ef7-8df3-4a3b-a137-3a78e89d425b\") " pod="openshift-machine-config-operator/machine-config-server-rw7hw" Mar 18 09:03:42.498942 master-0 kubenswrapper[26053]: I0318 09:03:42.498867 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc" Mar 18 09:03:42.499182 master-0 kubenswrapper[26053]: I0318 09:03:42.498977 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:42.499182 master-0 kubenswrapper[26053]: I0318 09:03:42.499032 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.499354 master-0 kubenswrapper[26053]: I0318 09:03:42.499323 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.499467 master-0 kubenswrapper[26053]: I0318 09:03:42.499423 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.499540 master-0 kubenswrapper[26053]: I0318 09:03:42.499524 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: 
\"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.499653 master-0 kubenswrapper[26053]: I0318 09:03:42.499622 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.499729 master-0 kubenswrapper[26053]: I0318 09:03:42.499665 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.499729 master-0 kubenswrapper[26053]: I0318 09:03:42.499702 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.499859 master-0 kubenswrapper[26053]: I0318 09:03:42.499778 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.500212 master-0 kubenswrapper[26053]: I0318 09:03:42.500157 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.500535 master-0 kubenswrapper[26053]: I0318 09:03:42.500489 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd-cert\") pod \"ingress-canary-226gc\" (UID: \"9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd\") " pod="openshift-ingress-canary/ingress-canary-226gc" Mar 18 09:03:42.500907 master-0 kubenswrapper[26053]: I0318 09:03:42.500862 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/2b59dbf5-0a61-4981-aed3-e73550615c4a-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-rm78n\" (UID: \"2b59dbf5-0a61-4981-aed3-e73550615c4a\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-rm78n" Mar 18 09:03:42.501266 master-0 kubenswrapper[26053]: I0318 09:03:42.501224 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.501337 master-0 kubenswrapper[26053]: I0318 09:03:42.501267 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 
09:03:42.501670 master-0 kubenswrapper[26053]: I0318 09:03:42.501619 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.501741 master-0 kubenswrapper[26053]: I0318 09:03:42.501700 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.501822 master-0 kubenswrapper[26053]: I0318 09:03:42.501743 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.501890 master-0 kubenswrapper[26053]: I0318 09:03:42.501845 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.501984 master-0 kubenswrapper[26053]: I0318 09:03:42.501935 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: 
\"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.504237 master-0 kubenswrapper[26053]: I0318 09:03:42.503964 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"metrics-server-7875f64c8-kmr8t\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:42.530494 master-0 kubenswrapper[26053]: I0318 09:03:42.530424 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fbs4\" (UniqueName: \"kubernetes.io/projected/1c322813-b574-4b46-b760-208ccecd01a5-kube-api-access-9fbs4\") pod \"community-operators-nfdcz\" (UID: \"1c322813-b574-4b46-b760-208ccecd01a5\") " pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:42.541294 master-0 kubenswrapper[26053]: I0318 09:03:42.541238 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxshz\" (UniqueName: \"kubernetes.io/projected/cda44dd8-895a-4eab-bedc-83f38efa2482-kube-api-access-bxshz\") pod \"tuned-84qxz\" (UID: \"cda44dd8-895a-4eab-bedc-83f38efa2482\") " pod="openshift-cluster-node-tuning-operator/tuned-84qxz" Mar 18 09:03:42.567179 master-0 kubenswrapper[26053]: I0318 09:03:42.567116 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47cpd\" (UniqueName: \"kubernetes.io/projected/e48101ca-f356-45e3-93d7-4e17b8d8066c-kube-api-access-47cpd\") pod \"network-metrics-daemon-2xs9n\" (UID: \"e48101ca-f356-45e3-93d7-4e17b8d8066c\") " pod="openshift-multus/network-metrics-daemon-2xs9n" Mar 18 09:03:42.590470 master-0 kubenswrapper[26053]: I0318 09:03:42.590390 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkcv\" (UniqueName: 
\"kubernetes.io/projected/81eefe1b-f683-4740-8fb0-0a5050f9b4a4-kube-api-access-qkkcv\") pod \"openshift-apiserver-operator-d65958b8-m8p9p\" (UID: \"81eefe1b-f683-4740-8fb0-0a5050f9b4a4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-m8p9p" Mar 18 09:03:42.603311 master-0 kubenswrapper[26053]: I0318 09:03:42.603246 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9ba06c-7a6b-4f46-a747-80b0a0b58600-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-cpbdr\" (UID: \"0f9ba06c-7a6b-4f46-a747-80b0a0b58600\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-cpbdr" Mar 18 09:03:42.630757 master-0 kubenswrapper[26053]: I0318 09:03:42.630626 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd8zs\" (UniqueName: \"kubernetes.io/projected/17b1447b-1659-405b-81e0-21f0cf3e7a2c-kube-api-access-rd8zs\") pod \"network-check-source-b4bf74f6-7zvkl\" (UID: \"17b1447b-1659-405b-81e0-21f0cf3e7a2c\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-7zvkl" Mar 18 09:03:42.643340 master-0 kubenswrapper[26053]: I0318 09:03:42.643225 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp84d\" (UniqueName: \"kubernetes.io/projected/4192ea44-a38c-4b70-93c3-8070da2ffe2f-kube-api-access-gp84d\") pod \"dns-operator-9c5679d8f-2649q\" (UID: \"4192ea44-a38c-4b70-93c3-8070da2ffe2f\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-2649q" Mar 18 09:03:42.663071 master-0 kubenswrapper[26053]: I0318 09:03:42.663016 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"controller-manager-7d954fcfb-gpddv\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " 
pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:42.683684 master-0 kubenswrapper[26053]: I0318 09:03:42.683627 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmztj\" (UniqueName: \"kubernetes.io/projected/be2682e4-cb63-4102-a83e-ef28023e273a-kube-api-access-nmztj\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl\" (UID: \"be2682e4-cb63-4102-a83e-ef28023e273a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl" Mar 18 09:03:42.702406 master-0 kubenswrapper[26053]: I0318 09:03:42.702310 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4l97\" (UniqueName: \"kubernetes.io/projected/411d544f-e105-44f0-927a-f61406b3f070-kube-api-access-t4l97\") pod \"catalogd-controller-manager-6864dc98f7-vbxdw\" (UID: \"411d544f-e105-44f0-927a-f61406b3f070\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:42.722805 master-0 kubenswrapper[26053]: I0318 09:03:42.722746 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2plvj\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-kube-api-access-2plvj\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:42.743116 master-0 kubenswrapper[26053]: I0318 09:03:42.743030 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfrf\" (UniqueName: \"kubernetes.io/projected/15b6612f-3a51-4a67-a566-8c520f85c6c2-kube-api-access-dcfrf\") pod \"apiserver-6ff67f5cc6-vg6s9\" (UID: \"15b6612f-3a51-4a67-a566-8c520f85c6c2\") " pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:42.760355 master-0 kubenswrapper[26053]: I0318 09:03:42.760284 26053 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xt64s\" (UniqueName: \"kubernetes.io/projected/93cb5ef1-e8f1-4d11-8c93-1abf24626176-kube-api-access-xt64s\") pod \"router-default-7dcf5569b5-sgsmn\" (UID: \"93cb5ef1-e8f1-4d11-8c93-1abf24626176\") " pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:42.793450 master-0 kubenswrapper[26053]: I0318 09:03:42.793326 26053 request.go:700] Waited for 3.895993077s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa/token Mar 18 09:03:42.796518 master-0 kubenswrapper[26053]: I0318 09:03:42.796277 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgx6\" (UniqueName: \"kubernetes.io/projected/8b779ce3-07c4-45ca-b1ca-750c95ed3d0b-kube-api-access-wxgx6\") pod \"network-operator-7bd846bfc4-6rtpx\" (UID: \"8b779ce3-07c4-45ca-b1ca-750c95ed3d0b\") " pod="openshift-network-operator/network-operator-7bd846bfc4-6rtpx" Mar 18 09:03:42.807673 master-0 kubenswrapper[26053]: I0318 09:03:42.807627 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 18 09:03:42.819178 master-0 kubenswrapper[26053]: I0318 09:03:42.818954 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7a3329-a04c-4b58-9364-b907c00cbe08-bound-sa-token\") pod \"ingress-operator-66b84d69b-4cxfh\" (UID: \"bf7a3329-a04c-4b58-9364-b907c00cbe08\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-4cxfh" Mar 18 09:03:42.837489 master-0 kubenswrapper[26053]: I0318 09:03:42.837444 26053 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1df9560e-21f0-44fe-bb51-4bc0fde4a3ac-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-xlfrc\" (UID: \"1df9560e-21f0-44fe-bb51-4bc0fde4a3ac\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-xlfrc" Mar 18 09:03:42.860783 master-0 kubenswrapper[26053]: I0318 09:03:42.860721 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqfdm\" (UniqueName: \"kubernetes.io/projected/fdd2f1fd-1a94-4f4e-a275-b075f432f763-kube-api-access-fqfdm\") pod \"multus-additional-cni-plugins-68tmr\" (UID: \"fdd2f1fd-1a94-4f4e-a275-b075f432f763\") " pod="openshift-multus/multus-additional-cni-plugins-68tmr" Mar 18 09:03:42.879544 master-0 kubenswrapper[26053]: I0318 09:03:42.879477 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dn5k\" (UniqueName: \"kubernetes.io/projected/7cac1300-44c1-4a7d-8d14-efa9702ad9df-kube-api-access-7dn5k\") pod \"ovnkube-control-plane-57f769d897-j2fgr\" (UID: \"7cac1300-44c1-4a7d-8d14-efa9702ad9df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-j2fgr" Mar 18 09:03:42.900306 master-0 kubenswrapper[26053]: I0318 09:03:42.900193 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cc640bf-cb5f-4493-b47b-6ea6f524525e-kube-api-access\") pod \"cluster-version-operator-7d58488df-q58jp\" (UID: \"9cc640bf-cb5f-4493-b47b-6ea6f524525e\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-q58jp" Mar 18 09:03:42.920922 master-0 kubenswrapper[26053]: I0318 09:03:42.920865 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ggjn\" (UniqueName: \"kubernetes.io/projected/a0cd1cf7-be6f-4baf-8761-69c693476de9-kube-api-access-2ggjn\") pod 
\"cloud-credential-operator-744f9dbf77-9xqgw\" (UID: \"a0cd1cf7-be6f-4baf-8761-69c693476de9\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-9xqgw" Mar 18 09:03:42.940470 master-0 kubenswrapper[26053]: I0318 09:03:42.940330 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42f4\" (UniqueName: \"kubernetes.io/projected/8683c8c6-3a77-4b46-8898-142f9781b49c-kube-api-access-g42f4\") pod \"prometheus-operator-6c8df6d4b-rqgh5\" (UID: \"8683c8c6-3a77-4b46-8898-142f9781b49c\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-rqgh5" Mar 18 09:03:42.958461 master-0 kubenswrapper[26053]: I0318 09:03:42.958390 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mkcq\" (UniqueName: \"kubernetes.io/projected/b2588f5c-327c-49cc-8cfb-0cce1ad758d5-kube-api-access-9mkcq\") pod \"dns-default-pj485\" (UID: \"b2588f5c-327c-49cc-8cfb-0cce1ad758d5\") " pod="openshift-dns/dns-default-pj485" Mar 18 09:03:42.977899 master-0 kubenswrapper[26053]: I0318 09:03:42.977831 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2mj5\" (UniqueName: \"kubernetes.io/projected/bb6ef4c4-bff3-4559-8e42-582bbd668b7c-kube-api-access-f2mj5\") pod \"etcd-operator-8544cbcf9c-f2nfl\" (UID: \"bb6ef4c4-bff3-4559-8e42-582bbd668b7c\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-f2nfl" Mar 18 09:03:42.998380 master-0 kubenswrapper[26053]: I0318 09:03:42.998316 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr9zx\" (UniqueName: \"kubernetes.io/projected/94e2a8f0-2c2e-43da-9fa9-69edfcd77830-kube-api-access-mr9zx\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vwqc4\" (UID: \"94e2a8f0-2c2e-43da-9fa9-69edfcd77830\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vwqc4" Mar 18 09:03:43.019409 master-0 kubenswrapper[26053]: I0318 
09:03:43.019338 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppm6\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-kube-api-access-rppm6\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:43.040439 master-0 kubenswrapper[26053]: I0318 09:03:43.040364 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94zpt\" (UniqueName: \"kubernetes.io/projected/09269324-c908-474d-818f-5cd49406f1e2-kube-api-access-94zpt\") pod \"cluster-monitoring-operator-58845fbb57-8vfjr\" (UID: \"09269324-c908-474d-818f-5cd49406f1e2\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-8vfjr" Mar 18 09:03:43.058253 master-0 kubenswrapper[26053]: I0318 09:03:43.058190 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwnvl\" (UniqueName: \"kubernetes.io/projected/f2fcd92f-0a58-4c87-8213-715453486aca-kube-api-access-zwnvl\") pod \"certified-operators-5x8lj\" (UID: \"f2fcd92f-0a58-4c87-8213-715453486aca\") " pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:43.077079 master-0 kubenswrapper[26053]: I0318 09:03:43.077022 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrbj\" (UniqueName: \"kubernetes.io/projected/cdcd27a4-6d46-47af-a14a-65f6501c10f0-kube-api-access-dfrbj\") pod \"machine-approver-5c6485487f-r4mv6\" (UID: \"cdcd27a4-6d46-47af-a14a-65f6501c10f0\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-r4mv6" Mar 18 09:03:43.085716 master-0 kubenswrapper[26053]: E0318 09:03:43.085678 26053 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:43.085780 master-0 kubenswrapper[26053]: 
E0318 09:03:43.085763 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist podName:47bfce36-23a9-4523-af40-dfeaaee7b671 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:43.585744051 +0000 UTC m=+11.079095542 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-qkhnq" (UID: "47bfce36-23a9-4523-af40-dfeaaee7b671") : failed to sync configmap cache: timed out waiting for the condition Mar 18 09:03:43.099491 master-0 kubenswrapper[26053]: I0318 09:03:43.099429 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnzhn\" (UniqueName: \"kubernetes.io/projected/57affd8b-d1ce-40d2-b31e-7b18645ca7b6-kube-api-access-fnzhn\") pod \"network-node-identity-lf7kq\" (UID: \"57affd8b-d1ce-40d2-b31e-7b18645ca7b6\") " pod="openshift-network-node-identity/network-node-identity-lf7kq" Mar 18 09:03:43.119490 master-0 kubenswrapper[26053]: I0318 09:03:43.119397 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmv75\" (UniqueName: \"kubernetes.io/projected/b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd-kube-api-access-nmv75\") pod \"service-ca-operator-b865698dc-fhlfx\" (UID: \"b40ee8d1-83f1-4d5e-8a24-2c2dbd7edbdd\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-fhlfx" Mar 18 09:03:43.151228 master-0 kubenswrapper[26053]: I0318 09:03:43.151052 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6p7s\" (UniqueName: \"kubernetes.io/projected/f918d08d-df7c-4e8d-85ba-1c92d766db16-kube-api-access-l6p7s\") pod \"insights-operator-68bf6ff9d6-89rtc\" (UID: \"f918d08d-df7c-4e8d-85ba-1c92d766db16\") " pod="openshift-insights/insights-operator-68bf6ff9d6-89rtc" Mar 18 09:03:43.167694 master-0 
kubenswrapper[26053]: I0318 09:03:43.167606 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx9dd\" (UniqueName: \"kubernetes.io/projected/ca9d4694-8675-47c5-819f-89bba9dcdc0f-kube-api-access-rx9dd\") pod \"marketplace-operator-89ccd998f-m862c\" (UID: \"ca9d4694-8675-47c5-819f-89bba9dcdc0f\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:43.183257 master-0 kubenswrapper[26053]: I0318 09:03:43.183186 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfms\" (UniqueName: \"kubernetes.io/projected/680006ef-a955-491e-b6a3-1ca7fcc20165-kube-api-access-kkfms\") pod \"service-ca-79bc6b8d76-fhj95\" (UID: \"680006ef-a955-491e-b6a3-1ca7fcc20165\") " pod="openshift-service-ca/service-ca-79bc6b8d76-fhj95" Mar 18 09:03:43.211502 master-0 kubenswrapper[26053]: I0318 09:03:43.211394 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jndvw\" (UniqueName: \"kubernetes.io/projected/5f827195-f68d-4bd2-865b-a1f041a5c73e-kube-api-access-jndvw\") pod \"cluster-olm-operator-67dcd4998-6gj8k\" (UID: \"5f827195-f68d-4bd2-865b-a1f041a5c73e\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-6gj8k" Mar 18 09:03:43.234671 master-0 kubenswrapper[26053]: I0318 09:03:43.234614 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q4k8\" (UniqueName: \"kubernetes.io/projected/995ec82c-b593-416a-9287-6020a484855c-kube-api-access-4q4k8\") pod \"redhat-operators-4r6jd\" (UID: \"995ec82c-b593-416a-9287-6020a484855c\") " pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:43.245106 master-0 kubenswrapper[26053]: I0318 09:03:43.245012 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5cgw\" (UniqueName: \"kubernetes.io/projected/25781967-12ce-490e-94aa-9b9722f495da-kube-api-access-z5cgw\") pod 
\"control-plane-machine-set-operator-6f97756bc8-s98kp\" (UID: \"25781967-12ce-490e-94aa-9b9722f495da\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-s98kp" Mar 18 09:03:43.258974 master-0 kubenswrapper[26053]: I0318 09:03:43.258919 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c56e1ac-8752-4e46-8692-93716087f0e0-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-c4lgf\" (UID: \"6c56e1ac-8752-4e46-8692-93716087f0e0\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-c4lgf" Mar 18 09:03:43.291260 master-0 kubenswrapper[26053]: I0318 09:03:43.291189 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97kq\" (UniqueName: \"kubernetes.io/projected/8dacdedc-c6ad-40d4-afdc-59a31be417fe-kube-api-access-g97kq\") pod \"ovnkube-node-6ff5l\" (UID: \"8dacdedc-c6ad-40d4-afdc-59a31be417fe\") " pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:43.303989 master-0 kubenswrapper[26053]: I0318 09:03:43.303939 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jcqf\" (UniqueName: \"kubernetes.io/projected/50a2c23f-26af-4c7f-8ea6-996bcfe173d0-kube-api-access-2jcqf\") pod \"packageserver-c8d87f55b-gsv6r\" (UID: \"50a2c23f-26af-4c7f-8ea6-996bcfe173d0\") " pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:43.322797 master-0 kubenswrapper[26053]: I0318 09:03:43.322746 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9rq\" (UniqueName: \"kubernetes.io/projected/95143c61-6f91-4cd4-9411-31c2fb75d4d0-kube-api-access-8t9rq\") pod \"openshift-config-operator-95bf4f4d-whh6r\" (UID: \"95143c61-6f91-4cd4-9411-31c2fb75d4d0\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:43.342007 master-0 kubenswrapper[26053]: E0318 
09:03:43.341947 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.342007 master-0 kubenswrapper[26053]: E0318 09:03:43.341995 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.342205 master-0 kubenswrapper[26053]: E0318 09:03:43.342093 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:43.842062365 +0000 UTC m=+11.335413756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.355595 master-0 kubenswrapper[26053]: E0318 09:03:43.355473 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:03:43.403693 master-0 kubenswrapper[26053]: I0318 09:03:43.403526 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/4.log" Mar 18 09:03:43.419903 master-0 kubenswrapper[26053]: E0318 09:03:43.419854 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:43.462072 master-0 
kubenswrapper[26053]: E0318 09:03:43.462009 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:03:43.462072 master-0 kubenswrapper[26053]: E0318 09:03:43.462048 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:43.462438 master-0 kubenswrapper[26053]: E0318 09:03:43.462124 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 18 09:03:43.462860 master-0 kubenswrapper[26053]: I0318 09:03:43.462816 26053 scope.go:117] "RemoveContainer" containerID="b98c563bab7682462c40e7da7e26ff18216a7a69aec7a61033377ca04547a6d0" Mar 18 09:03:43.494890 master-0 kubenswrapper[26053]: I0318 09:03:43.491959 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 18 09:03:43.508973 master-0 kubenswrapper[26053]: I0318 09:03:43.508914 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-n9j87" Mar 18 09:03:43.528599 master-0 kubenswrapper[26053]: E0318 09:03:43.528541 26053 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.681s" Mar 18 09:03:43.528599 master-0 kubenswrapper[26053]: I0318 09:03:43.528609 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528621 26053 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="c47ac101-a848-4f5e-a03d-3382567e0d85" Mar 18 09:03:43.528859 master-0 
kubenswrapper[26053]: I0318 09:03:43.528639 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528646 26053 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="c47ac101-a848-4f5e-a03d-3382567e0d85" Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528654 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-7875f64c8-kmr8t"] Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528671 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerDied","Data":"b98c563bab7682462c40e7da7e26ff18216a7a69aec7a61033377ca04547a6d0"} Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528687 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528701 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-qnc62" event={"ID":"4e919445-81d0-4663-8941-f596d8121305","Type":"ContainerStarted","Data":"e07985856f1b55f4bb3fe73cd6a5d35a3fd38055f81b34ff616b590e698e2a00"} Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528735 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:43.528859 master-0 kubenswrapper[26053]: I0318 09:03:43.528760 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:43.531252 master-0 
kubenswrapper[26053]: I0318 09:03:43.530670 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerName="metrics-server" containerID="cri-o://81a151a3aa12b152f9071a9f499fc6c53ed0410a76702e645d7cd7db06bbf80b" gracePeriod=170 Mar 18 09:03:43.541396 master-0 kubenswrapper[26053]: I0318 09:03:43.541307 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:43.541396 master-0 kubenswrapper[26053]: I0318 09:03:43.541347 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-599f97d97f-6zmlx"] Mar 18 09:03:43.543591 master-0 kubenswrapper[26053]: I0318 09:03:43.543498 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-599f97d97f-6zmlx"] Mar 18 09:03:43.544003 master-0 kubenswrapper[26053]: I0318 09:03:43.543847 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:43.544003 master-0 kubenswrapper[26053]: I0318 09:03:43.543974 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:03:43.544107 master-0 kubenswrapper[26053]: I0318 09:03:43.544011 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:43.544107 master-0 kubenswrapper[26053]: I0318 09:03:43.544065 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:43.544189 master-0 kubenswrapper[26053]: I0318 09:03:43.543921 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544097 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544530 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544589 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544607 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544668 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544795 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:43.544939 master-0 kubenswrapper[26053]: I0318 09:03:43.544893 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:43.545182 master-0 kubenswrapper[26053]: I0318 09:03:43.545003 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-vbxdw" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547072 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 
09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547150 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547178 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547203 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547224 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-m862c" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547287 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547314 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pj485" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547376 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547427 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547456 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:43.548262 master-0 kubenswrapper[26053]: I0318 09:03:43.547798 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548671 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pj485" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548709 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548729 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548738 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548760 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548802 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548829 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:43.548863 master-0 kubenswrapper[26053]: I0318 09:03:43.548845 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-whh6r" Mar 18 09:03:43.550389 master-0 kubenswrapper[26053]: I0318 09:03:43.550232 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:43.558687 master-0 kubenswrapper[26053]: I0318 09:03:43.558437 26053 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-c8d87f55b-gsv6r" Mar 18 09:03:43.564370 master-0 kubenswrapper[26053]: I0318 09:03:43.564334 26053 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 18 09:03:43.574929 master-0 kubenswrapper[26053]: I0318 09:03:43.574902 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:43.577801 master-0 kubenswrapper[26053]: I0318 09:03:43.576692 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4wvk\" (UniqueName: \"kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:43.577801 master-0 kubenswrapper[26053]: I0318 09:03:43.577720 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l" Mar 18 09:03:43.590940 master-0 kubenswrapper[26053]: I0318 09:03:43.589358 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4lqimvakop077" Mar 18 09:03:43.616758 master-0 kubenswrapper[26053]: I0318 09:03:43.616543 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-server-tls\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.616758 master-0 kubenswrapper[26053]: I0318 09:03:43.616655 26053 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/876181ab-b5e4-4d9d-aae8-710a9e7ad213-audit-log\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.622157 master-0 kubenswrapper[26053]: I0318 09:03:43.622123 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:43.622232 master-0 kubenswrapper[26053]: I0318 09:03:43.622188 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88qdh\" (UniqueName: \"kubernetes.io/projected/876181ab-b5e4-4d9d-aae8-710a9e7ad213-kube-api-access-88qdh\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.622916 master-0 kubenswrapper[26053]: I0318 09:03:43.622886 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-qkhnq\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:43.623938 master-0 kubenswrapper[26053]: I0318 09:03:43.623899 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-metrics-server-audit-profiles\") pod \"metrics-server-599f97d97f-6zmlx\" 
(UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.624112 master-0 kubenswrapper[26053]: I0318 09:03:43.624074 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-client-certs\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.624286 master-0 kubenswrapper[26053]: I0318 09:03:43.624255 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-client-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.624360 master-0 kubenswrapper[26053]: I0318 09:03:43.624333 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.685480 master-0 kubenswrapper[26053]: I0318 09:03:43.685324 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:43.726415 master-0 kubenswrapper[26053]: I0318 09:03:43.725776 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88qdh\" (UniqueName: \"kubernetes.io/projected/876181ab-b5e4-4d9d-aae8-710a9e7ad213-kube-api-access-88qdh\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.726415 master-0 kubenswrapper[26053]: I0318 09:03:43.725850 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-metrics-server-audit-profiles\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.726415 master-0 kubenswrapper[26053]: I0318 09:03:43.725878 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-client-certs\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.726415 master-0 kubenswrapper[26053]: I0318 09:03:43.725915 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-client-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.726415 master-0 kubenswrapper[26053]: I0318 09:03:43.725945 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.726937 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-server-tls\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.726981 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/876181ab-b5e4-4d9d-aae8-710a9e7ad213-audit-log\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.727351 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/876181ab-b5e4-4d9d-aae8-710a9e7ad213-audit-log\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.728215 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " 
pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.730146 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/876181ab-b5e4-4d9d-aae8-710a9e7ad213-metrics-server-audit-profiles\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.733732 master-0 kubenswrapper[26053]: I0318 09:03:43.730643 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-client-certs\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.740859 master-0 kubenswrapper[26053]: I0318 09:03:43.740794 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-secret-metrics-server-tls\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.759681 master-0 kubenswrapper[26053]: I0318 09:03:43.758623 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/876181ab-b5e4-4d9d-aae8-710a9e7ad213-client-ca-bundle\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.762477 master-0 kubenswrapper[26053]: I0318 09:03:43.762450 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88qdh\" (UniqueName: 
\"kubernetes.io/projected/876181ab-b5e4-4d9d-aae8-710a9e7ad213-kube-api-access-88qdh\") pod \"metrics-server-599f97d97f-6zmlx\" (UID: \"876181ab-b5e4-4d9d-aae8-710a9e7ad213\") " pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.863496 master-0 kubenswrapper[26053]: I0318 09:03:43.863337 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:43.930947 master-0 kubenswrapper[26053]: I0318 09:03:43.930840 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:43.931527 master-0 kubenswrapper[26053]: E0318 09:03:43.931481 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.931613 master-0 kubenswrapper[26053]: E0318 09:03:43.931513 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.931681 master-0 kubenswrapper[26053]: E0318 09:03:43.931656 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:44.931632763 +0000 UTC m=+12.424984154 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:43.961062 master-0 kubenswrapper[26053]: I0318 09:03:43.961019 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-sgsmn" Mar 18 09:03:44.284005 master-0 kubenswrapper[26053]: I0318 09:03:44.283958 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-599f97d97f-6zmlx"] Mar 18 09:03:44.295027 master-0 kubenswrapper[26053]: W0318 09:03:44.294948 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod876181ab_b5e4_4d9d_aae8_710a9e7ad213.slice/crio-96f9c8dc1b1a137a8eaab809a30fdaf73534790a6e46d9751a4255c5b283f4c0 WatchSource:0}: Error finding container 96f9c8dc1b1a137a8eaab809a30fdaf73534790a6e46d9751a4255c5b283f4c0: Status 404 returned error can't find the container with id 96f9c8dc1b1a137a8eaab809a30fdaf73534790a6e46d9751a4255c5b283f4c0 Mar 18 09:03:44.303950 master-0 kubenswrapper[26053]: I0318 09:03:44.303914 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:44.377290 master-0 kubenswrapper[26053]: I0318 09:03:44.377197 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:44.419859 master-0 kubenswrapper[26053]: I0318 09:03:44.419821 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-check-endpoints/0.log" Mar 18 09:03:44.423831 master-0 kubenswrapper[26053]: I0318 09:03:44.423799 
26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"ac3507630eeeca1ec26dca5ed036e3bb","Type":"ContainerStarted","Data":"92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44"} Mar 18 09:03:44.425581 master-0 kubenswrapper[26053]: I0318 09:03:44.425513 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" event={"ID":"47bfce36-23a9-4523-af40-dfeaaee7b671","Type":"ContainerStarted","Data":"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1"} Mar 18 09:03:44.425666 master-0 kubenswrapper[26053]: I0318 09:03:44.425596 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" event={"ID":"47bfce36-23a9-4523-af40-dfeaaee7b671","Type":"ContainerStarted","Data":"769f6a7e1832b9edaf00d364768854d8db13038252fd1a1b32d6fa56948828f0"} Mar 18 09:03:44.425799 master-0 kubenswrapper[26053]: I0318 09:03:44.425772 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:44.430363 master-0 kubenswrapper[26053]: I0318 09:03:44.430331 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:44.431766 master-0 kubenswrapper[26053]: I0318 09:03:44.431731 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" event={"ID":"876181ab-b5e4-4d9d-aae8-710a9e7ad213","Type":"ContainerStarted","Data":"96f9c8dc1b1a137a8eaab809a30fdaf73534790a6e46d9751a4255c5b283f4c0"} Mar 18 09:03:44.433521 master-0 kubenswrapper[26053]: I0318 09:03:44.433476 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:03:44.433603 master-0 kubenswrapper[26053]: I0318 09:03:44.433532 26053 mirror_client.go:130] "Deleting a 
mirror pod" pod="openshift-etcd/etcd-master-0" podUID="a9391117-4261-4eba-b3f6-0d77562a8375" Mar 18 09:03:44.445922 master-0 kubenswrapper[26053]: I0318 09:03:44.445863 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:44.482078 master-0 kubenswrapper[26053]: I0318 09:03:44.482007 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:44.482542 master-0 kubenswrapper[26053]: I0318 09:03:44.482492 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 09:03:44.485377 master-0 kubenswrapper[26053]: I0318 09:03:44.485338 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 09:03:44.524116 master-0 kubenswrapper[26053]: I0318 09:03:44.524074 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 18 09:03:44.626396 master-0 kubenswrapper[26053]: I0318 09:03:44.626251 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:44.628185 master-0 kubenswrapper[26053]: I0318 09:03:44.628140 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-xfqsm" Mar 18 09:03:44.948213 master-0 kubenswrapper[26053]: I0318 09:03:44.948087 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:44.948376 master-0 kubenswrapper[26053]: E0318 09:03:44.948292 26053 projected.go:288] Couldn't get 
configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:44.948376 master-0 kubenswrapper[26053]: E0318 09:03:44.948324 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:44.948448 master-0 kubenswrapper[26053]: E0318 09:03:44.948385 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:46.948368232 +0000 UTC m=+14.441719603 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:45.406908 master-0 kubenswrapper[26053]: I0318 09:03:45.406799 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.406775276 podStartE2EDuration="1.406775276s" podCreationTimestamp="2026-03-18 09:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:45.404492488 +0000 UTC m=+12.897843879" watchObservedRunningTime="2026-03-18 09:03:45.406775276 +0000 UTC m=+12.900126677" Mar 18 09:03:45.440835 master-0 kubenswrapper[26053]: I0318 09:03:45.440754 26053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:03:45.440835 master-0 kubenswrapper[26053]: I0318 09:03:45.440808 26053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 
09:03:45.440835 master-0 kubenswrapper[26053]: I0318 09:03:45.440794 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" event={"ID":"876181ab-b5e4-4d9d-aae8-710a9e7ad213","Type":"ContainerStarted","Data":"060d5a1261b800d909e193b0161053c6753252a1075602e19aeb1c921123c377"} Mar 18 09:03:45.454279 master-0 kubenswrapper[26053]: I0318 09:03:45.454223 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:45.468359 master-0 kubenswrapper[26053]: I0318 09:03:45.468293 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:03:45.472324 master-0 kubenswrapper[26053]: I0318 09:03:45.472248 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 18 09:03:46.110049 master-0 kubenswrapper[26053]: I0318 09:03:46.109970 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:46.223178 master-0 kubenswrapper[26053]: I0318 09:03:46.222647 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" podStartSLOduration=3.222617377 podStartE2EDuration="3.222617377s" podCreationTimestamp="2026-03-18 09:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:46.220904263 +0000 UTC m=+13.714255654" watchObservedRunningTime="2026-03-18 09:03:46.222617377 +0000 UTC m=+13.715968758" Mar 18 09:03:46.300268 master-0 kubenswrapper[26053]: I0318 09:03:46.300161 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" podStartSLOduration=6.300109261 podStartE2EDuration="6.300109261s" 
podCreationTimestamp="2026-03-18 09:03:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:46.299052543 +0000 UTC m=+13.792403924" watchObservedRunningTime="2026-03-18 09:03:46.300109261 +0000 UTC m=+13.793460642" Mar 18 09:03:46.388407 master-0 kubenswrapper[26053]: I0318 09:03:46.388263 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=3.388241748 podStartE2EDuration="3.388241748s" podCreationTimestamp="2026-03-18 09:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:46.387022587 +0000 UTC m=+13.880374018" watchObservedRunningTime="2026-03-18 09:03:46.388241748 +0000 UTC m=+13.881593129" Mar 18 09:03:46.448535 master-0 kubenswrapper[26053]: I0318 09:03:46.448477 26053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 18 09:03:46.635595 master-0 kubenswrapper[26053]: I0318 09:03:46.634867 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:46.831503 master-0 kubenswrapper[26053]: I0318 09:03:46.831449 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:46.969552 master-0 kubenswrapper[26053]: I0318 09:03:46.969494 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:46.974013 master-0 kubenswrapper[26053]: I0318 09:03:46.973956 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-k6xp5" Mar 18 09:03:46.994782 master-0 kubenswrapper[26053]: E0318 
09:03:46.994697 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:46.994782 master-0 kubenswrapper[26053]: E0318 09:03:46.994772 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:46.995078 master-0 kubenswrapper[26053]: E0318 09:03:46.994833 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:50.994814684 +0000 UTC m=+18.488166065 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:46.995245 master-0 kubenswrapper[26053]: I0318 09:03:46.994553 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:47.178677 master-0 kubenswrapper[26053]: I0318 09:03:47.177438 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qkhnq"] Mar 18 09:03:47.278118 master-0 kubenswrapper[26053]: I0318 09:03:47.278055 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 
09:03:47.281682 master-0 kubenswrapper[26053]: I0318 09:03:47.281641 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hhn7l" Mar 18 09:03:47.345610 master-0 kubenswrapper[26053]: I0318 09:03:47.345475 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-77f845f574-2wpgz" Mar 18 09:03:47.465351 master-0 kubenswrapper[26053]: I0318 09:03:47.465301 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" gracePeriod=30 Mar 18 09:03:47.887943 master-0 kubenswrapper[26053]: I0318 09:03:47.887729 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 09:03:47.889466 master-0 kubenswrapper[26053]: I0318 09:03:47.889430 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:47.893212 master-0 kubenswrapper[26053]: I0318 09:03:47.893172 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-twp27" Mar 18 09:03:47.954976 master-0 kubenswrapper[26053]: I0318 09:03:47.954925 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6ff67f5cc6-vg6s9" Mar 18 09:03:48.489239 master-0 kubenswrapper[26053]: I0318 09:03:48.489194 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:48.525895 master-0 kubenswrapper[26053]: I0318 09:03:48.525859 26053 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfdcz" Mar 18 09:03:49.176653 master-0 kubenswrapper[26053]: I0318 09:03:49.176610 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:49.181754 master-0 kubenswrapper[26053]: I0318 09:03:49.181717 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:03:49.974190 master-0 kubenswrapper[26053]: I0318 09:03:49.974142 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 09:03:49.976049 master-0 kubenswrapper[26053]: I0318 09:03:49.976014 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-7r2q2" Mar 18 09:03:50.013200 master-0 kubenswrapper[26053]: I0318 09:03:50.013162 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx" Mar 18 09:03:50.024827 master-0 kubenswrapper[26053]: I0318 09:03:50.024397 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:50.097548 master-0 kubenswrapper[26053]: I0318 09:03:50.096453 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" Mar 18 09:03:50.103148 master-0 kubenswrapper[26053]: I0318 09:03:50.101826 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-4jrzp" Mar 18 09:03:51.051873 master-0 kubenswrapper[26053]: I0318 09:03:51.051828 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:51.052402 master-0 kubenswrapper[26053]: E0318 09:03:51.051995 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:51.052503 master-0 kubenswrapper[26053]: E0318 09:03:51.052491 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:51.052654 master-0 kubenswrapper[26053]: E0318 09:03:51.052614 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:03:59.052596503 +0000 UTC m=+26.545947874 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:51.426557 master-0 kubenswrapper[26053]: I0318 09:03:51.426440 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr"] Mar 18 09:03:51.427653 master-0 kubenswrapper[26053]: I0318 09:03:51.427630 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.427772 master-0 kubenswrapper[26053]: I0318 09:03:51.427740 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:51.429110 master-0 kubenswrapper[26053]: I0318 09:03:51.429094 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tnmb8" Mar 18 09:03:51.443134 master-0 kubenswrapper[26053]: I0318 09:03:51.443079 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr"] Mar 18 09:03:51.561120 master-0 kubenswrapper[26053]: I0318 09:03:51.560955 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgbc\" (UniqueName: \"kubernetes.io/projected/2c108235-6537-4130-a858-6d38cd71e4fd-kube-api-access-nvgbc\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: \"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.561120 master-0 kubenswrapper[26053]: I0318 09:03:51.561009 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c108235-6537-4130-a858-6d38cd71e4fd-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: \"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.669863 master-0 kubenswrapper[26053]: I0318 09:03:51.669785 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvgbc\" (UniqueName: \"kubernetes.io/projected/2c108235-6537-4130-a858-6d38cd71e4fd-kube-api-access-nvgbc\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: 
\"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.670231 master-0 kubenswrapper[26053]: I0318 09:03:51.670171 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c108235-6537-4130-a858-6d38cd71e4fd-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: \"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.673411 master-0 kubenswrapper[26053]: I0318 09:03:51.673367 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2c108235-6537-4130-a858-6d38cd71e4fd-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: \"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.691529 master-0 kubenswrapper[26053]: I0318 09:03:51.691418 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvgbc\" (UniqueName: \"kubernetes.io/projected/2c108235-6537-4130-a858-6d38cd71e4fd-kube-api-access-nvgbc\") pod \"multus-admission-controller-58c9f8fc64-hq2gr\" (UID: \"2c108235-6537-4130-a858-6d38cd71e4fd\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.746506 master-0 kubenswrapper[26053]: I0318 09:03:51.746441 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" Mar 18 09:03:51.945021 master-0 kubenswrapper[26053]: I0318 09:03:51.944839 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: I0318 09:03:51.948318 26053 patch_prober.go:28] interesting pod/metrics-server-7875f64c8-kmr8t container/metrics-server namespace/openshift-monitoring: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]log ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]poststarthook/max-in-flight-filter ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]metric-storage-ready ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]metric-informer-sync ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [+]metadata-informer-sync ok Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: [-]shutdown failed: reason withheld Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: readyz check failed Mar 18 09:03:51.951697 master-0 kubenswrapper[26053]: I0318 09:03:51.948377 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerName="metrics-server" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 18 09:03:52.194099 master-0 kubenswrapper[26053]: I0318 09:03:52.194053 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr"] Mar 18 09:03:52.211377 master-0 kubenswrapper[26053]: W0318 09:03:52.210830 26053 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c108235_6537_4130_a858_6d38cd71e4fd.slice/crio-114affd5d3d4131ef0d992fae54af57365d2b70dc65df97413a7adac5957ad79 WatchSource:0}: Error finding container 114affd5d3d4131ef0d992fae54af57365d2b70dc65df97413a7adac5957ad79: Status 404 returned error can't find the container with id 114affd5d3d4131ef0d992fae54af57365d2b70dc65df97413a7adac5957ad79 Mar 18 09:03:52.498772 master-0 kubenswrapper[26053]: I0318 09:03:52.498629 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" event={"ID":"2c108235-6537-4130-a858-6d38cd71e4fd","Type":"ContainerStarted","Data":"c4577ab08fd2b60a1456805add2a1fc07fbfa16276dd991492b803f94b562c45"} Mar 18 09:03:52.498772 master-0 kubenswrapper[26053]: I0318 09:03:52.498698 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" event={"ID":"2c108235-6537-4130-a858-6d38cd71e4fd","Type":"ContainerStarted","Data":"114affd5d3d4131ef0d992fae54af57365d2b70dc65df97413a7adac5957ad79"} Mar 18 09:03:53.285029 master-0 kubenswrapper[26053]: I0318 09:03:53.284965 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4r6jd" Mar 18 09:03:53.299438 master-0 kubenswrapper[26053]: I0318 09:03:53.299394 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5x8lj" Mar 18 09:03:53.519687 master-0 kubenswrapper[26053]: I0318 09:03:53.519260 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" event={"ID":"2c108235-6537-4130-a858-6d38cd71e4fd","Type":"ContainerStarted","Data":"eaeb3a2882b929350273327dbe6be2b2f780b8102c4052f602cd0e7837308f1e"} Mar 18 09:03:53.610295 master-0 kubenswrapper[26053]: I0318 09:03:53.610122 26053 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-hq2gr" podStartSLOduration=2.610100794 podStartE2EDuration="2.610100794s" podCreationTimestamp="2026-03-18 09:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:03:53.54661712 +0000 UTC m=+21.039968511" watchObservedRunningTime="2026-03-18 09:03:53.610100794 +0000 UTC m=+21.103452175" Mar 18 09:03:53.617219 master-0 kubenswrapper[26053]: I0318 09:03:53.614316 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"] Mar 18 09:03:53.617219 master-0 kubenswrapper[26053]: I0318 09:03:53.614647 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="multus-admission-controller" containerID="cri-o://306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b" gracePeriod=30 Mar 18 09:03:53.617219 master-0 kubenswrapper[26053]: I0318 09:03:53.615157 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="kube-rbac-proxy" containerID="cri-o://47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f" gracePeriod=30 Mar 18 09:03:53.654226 master-0 kubenswrapper[26053]: I0318 09:03:53.652984 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-mtpcs"] Mar 18 09:03:53.654226 master-0 kubenswrapper[26053]: I0318 09:03:53.653731 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.662069 master-0 kubenswrapper[26053]: I0318 09:03:53.661997 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 09:03:53.662159 master-0 kubenswrapper[26053]: I0318 09:03:53.662121 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-mbgdl" Mar 18 09:03:53.662412 master-0 kubenswrapper[26053]: I0318 09:03:53.662385 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 09:03:53.662525 master-0 kubenswrapper[26053]: I0318 09:03:53.662502 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 09:03:53.662645 master-0 kubenswrapper[26053]: I0318 09:03:53.662626 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 18 09:03:53.671731 master-0 kubenswrapper[26053]: I0318 09:03:53.671215 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 09:03:53.688329 master-0 kubenswrapper[26053]: E0318 09:03:53.688217 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:03:53.694417 master-0 kubenswrapper[26053]: I0318 09:03:53.694367 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-mtpcs"] Mar 18 09:03:53.702158 master-0 kubenswrapper[26053]: E0318 09:03:53.700832 26053 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:03:53.702436 master-0 kubenswrapper[26053]: E0318 09:03:53.702396 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:03:53.702479 master-0 kubenswrapper[26053]: E0318 09:03:53.702436 26053 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" Mar 18 09:03:53.802036 master-0 kubenswrapper[26053]: I0318 09:03:53.801972 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv57k\" (UniqueName: \"kubernetes.io/projected/8dc1b108-349c-48ab-a6e5-5943067ced62-kube-api-access-pv57k\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.802036 master-0 kubenswrapper[26053]: I0318 09:03:53.802036 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc1b108-349c-48ab-a6e5-5943067ced62-serving-cert\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " 
pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.802438 master-0 kubenswrapper[26053]: I0318 09:03:53.802074 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-trusted-ca\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.802438 master-0 kubenswrapper[26053]: I0318 09:03:53.802092 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-config\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.902885 master-0 kubenswrapper[26053]: I0318 09:03:53.902744 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv57k\" (UniqueName: \"kubernetes.io/projected/8dc1b108-349c-48ab-a6e5-5943067ced62-kube-api-access-pv57k\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.903083 master-0 kubenswrapper[26053]: I0318 09:03:53.902961 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc1b108-349c-48ab-a6e5-5943067ced62-serving-cert\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.903083 master-0 kubenswrapper[26053]: I0318 09:03:53.903081 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-trusted-ca\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.903195 master-0 kubenswrapper[26053]: I0318 09:03:53.903108 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-config\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.904168 master-0 kubenswrapper[26053]: I0318 09:03:53.904128 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-config\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.904596 master-0 kubenswrapper[26053]: I0318 09:03:53.904545 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dc1b108-349c-48ab-a6e5-5943067ced62-trusted-ca\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.912677 master-0 kubenswrapper[26053]: I0318 09:03:53.912579 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc1b108-349c-48ab-a6e5-5943067ced62-serving-cert\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:53.917953 master-0 kubenswrapper[26053]: I0318 09:03:53.917918 26053 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pv57k\" (UniqueName: \"kubernetes.io/projected/8dc1b108-349c-48ab-a6e5-5943067ced62-kube-api-access-pv57k\") pod \"console-operator-76b6568d85-mtpcs\" (UID: \"8dc1b108-349c-48ab-a6e5-5943067ced62\") " pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:54.013087 master-0 kubenswrapper[26053]: I0318 09:03:54.013028 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:03:54.470251 master-0 kubenswrapper[26053]: I0318 09:03:54.470165 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-mtpcs"] Mar 18 09:03:54.476947 master-0 kubenswrapper[26053]: W0318 09:03:54.476896 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc1b108_349c_48ab_a6e5_5943067ced62.slice/crio-47445c3bbc0ddb92e7741b51284e3ba23b990945a7b146d4742f7606ce2285a5 WatchSource:0}: Error finding container 47445c3bbc0ddb92e7741b51284e3ba23b990945a7b146d4742f7606ce2285a5: Status 404 returned error can't find the container with id 47445c3bbc0ddb92e7741b51284e3ba23b990945a7b146d4742f7606ce2285a5 Mar 18 09:03:54.479262 master-0 kubenswrapper[26053]: I0318 09:03:54.479230 26053 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 18 09:03:54.527163 master-0 kubenswrapper[26053]: I0318 09:03:54.527097 26053 generic.go:334] "Generic (PLEG): container finished" podID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerID="47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f" exitCode=0 Mar 18 09:03:54.527395 master-0 kubenswrapper[26053]: I0318 09:03:54.527168 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" 
event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerDied","Data":"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f"} Mar 18 09:03:54.528393 master-0 kubenswrapper[26053]: I0318 09:03:54.528333 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" event={"ID":"8dc1b108-349c-48ab-a6e5-5943067ced62","Type":"ContainerStarted","Data":"47445c3bbc0ddb92e7741b51284e3ba23b990945a7b146d4742f7606ce2285a5"} Mar 18 09:03:56.642856 master-0 kubenswrapper[26053]: I0318 09:03:56.642789 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:03:57.954964 master-0 kubenswrapper[26053]: I0318 09:03:57.954890 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:57.995165 master-0 kubenswrapper[26053]: I0318 09:03:57.995102 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2gpbt" Mar 18 09:03:58.631474 master-0 kubenswrapper[26053]: I0318 09:03:58.631398 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq"] Mar 18 09:03:58.640019 master-0 kubenswrapper[26053]: I0318 09:03:58.633792 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" Mar 18 09:03:58.642304 master-0 kubenswrapper[26053]: I0318 09:03:58.642218 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6mthr" Mar 18 09:03:58.642526 master-0 kubenswrapper[26053]: I0318 09:03:58.642407 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 09:03:58.714309 master-0 kubenswrapper[26053]: I0318 09:03:58.714245 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq"] Mar 18 09:03:58.779648 master-0 kubenswrapper[26053]: I0318 09:03:58.779596 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a5751f72-30f7-439b-a1de-af588611984c-monitoring-plugin-cert\") pod \"monitoring-plugin-7dfd446df6-76mgq\" (UID: \"a5751f72-30f7-439b-a1de-af588611984c\") " pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" Mar 18 09:03:58.881396 master-0 kubenswrapper[26053]: I0318 09:03:58.881325 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a5751f72-30f7-439b-a1de-af588611984c-monitoring-plugin-cert\") pod \"monitoring-plugin-7dfd446df6-76mgq\" (UID: \"a5751f72-30f7-439b-a1de-af588611984c\") " pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" Mar 18 09:03:58.885147 master-0 kubenswrapper[26053]: I0318 09:03:58.885066 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/a5751f72-30f7-439b-a1de-af588611984c-monitoring-plugin-cert\") pod \"monitoring-plugin-7dfd446df6-76mgq\" (UID: \"a5751f72-30f7-439b-a1de-af588611984c\") " pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" Mar 
18 09:03:58.974637 master-0 kubenswrapper[26053]: I0318 09:03:58.974471 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" Mar 18 09:03:59.083792 master-0 kubenswrapper[26053]: I0318 09:03:59.083711 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:03:59.084064 master-0 kubenswrapper[26053]: E0318 09:03:59.083999 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:59.084064 master-0 kubenswrapper[26053]: E0318 09:03:59.084056 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:59.084218 master-0 kubenswrapper[26053]: E0318 09:03:59.084136 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:15.08411678 +0000 UTC m=+42.577468161 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:03:59.346480 master-0 kubenswrapper[26053]: I0318 09:03:59.346387 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:03:59.346941 master-0 kubenswrapper[26053]: I0318 09:03:59.346681 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://7c2aae6fa53257e6d8c7e1c783c29a93037db597eccbd9c6d53d330e1c671296" gracePeriod=30 Mar 18 09:03:59.347595 master-0 kubenswrapper[26053]: I0318 09:03:59.347503 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:03:59.347897 master-0 kubenswrapper[26053]: E0318 09:03:59.347848 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.347897 master-0 kubenswrapper[26053]: I0318 09:03:59.347872 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.348136 master-0 kubenswrapper[26053]: I0318 09:03:59.348047 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.348136 master-0 kubenswrapper[26053]: I0318 09:03:59.348075 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.348317 master-0 kubenswrapper[26053]: E0318 09:03:59.348209 26053 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.348317 master-0 kubenswrapper[26053]: I0318 09:03:59.348218 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" Mar 18 09:03:59.349227 master-0 kubenswrapper[26053]: I0318 09:03:59.349185 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.490295 master-0 kubenswrapper[26053]: I0318 09:03:59.490216 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.490295 master-0 kubenswrapper[26053]: I0318 09:03:59.490264 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.569315 master-0 kubenswrapper[26053]: I0318 09:03:59.569263 26053 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="7c2aae6fa53257e6d8c7e1c783c29a93037db597eccbd9c6d53d330e1c671296" exitCode=0 Mar 18 09:03:59.569484 master-0 kubenswrapper[26053]: I0318 09:03:59.569343 26053 scope.go:117] "RemoveContainer" containerID="6d079fc624d85c119aa8e55be99b13072fefe4d01c61b51b27950cfbdef8830f" Mar 18 09:03:59.592063 master-0 kubenswrapper[26053]: I0318 09:03:59.592017 26053 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.592063 master-0 kubenswrapper[26053]: I0318 09:03:59.592058 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.592267 master-0 kubenswrapper[26053]: I0318 09:03:59.592132 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.592267 master-0 kubenswrapper[26053]: I0318 09:03:59.592175 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.598653 master-0 kubenswrapper[26053]: I0318 09:03:59.598199 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:03:59.598653 master-0 kubenswrapper[26053]: I0318 09:03:59.598276 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:03:59.958024 master-0 kubenswrapper[26053]: I0318 09:03:59.940513 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq"] Mar 18 09:03:59.958024 master-0 kubenswrapper[26053]: W0318 09:03:59.940886 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5751f72_30f7_439b_a1de_af588611984c.slice/crio-f90ca571f8ae4553fdfba5853d5ac6436d84df71c1d32efd0944e894bd445a6e WatchSource:0}: Error finding container f90ca571f8ae4553fdfba5853d5ac6436d84df71c1d32efd0944e894bd445a6e: Status 404 returned error can't find the container with id f90ca571f8ae4553fdfba5853d5ac6436d84df71c1d32efd0944e894bd445a6e Mar 18 09:04:00.254112 master-0 kubenswrapper[26053]: I0318 09:04:00.254066 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:04:00.404396 master-0 kubenswrapper[26053]: I0318 09:04:00.404245 26053 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="822ba66b-89de-4f44-aa7f-39706b0d8a46" Mar 18 09:04:00.405188 master-0 kubenswrapper[26053]: I0318 09:04:00.405134 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 09:04:00.405380 master-0 kubenswrapper[26053]: I0318 09:04:00.405347 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 18 
09:04:00.405751 master-0 kubenswrapper[26053]: I0318 09:04:00.405519 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:00.405952 master-0 kubenswrapper[26053]: I0318 09:04:00.405909 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:00.406186 master-0 kubenswrapper[26053]: I0318 09:04:00.406130 26053 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:00.508133 master-0 kubenswrapper[26053]: I0318 09:04:00.508040 26053 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:00.579007 master-0 kubenswrapper[26053]: I0318 09:04:00.578917 26053 generic.go:334] "Generic (PLEG): container finished" podID="e2af879e-1465-40bf-bf72-30c7e89386a3" containerID="96f265b2997fc8f98bf93a3602e88baaf10a3bddac7d7468686ac08fed98ccb6" exitCode=0 Mar 18 09:04:00.579297 master-0 kubenswrapper[26053]: I0318 09:04:00.579051 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"e2af879e-1465-40bf-bf72-30c7e89386a3","Type":"ContainerDied","Data":"96f265b2997fc8f98bf93a3602e88baaf10a3bddac7d7468686ac08fed98ccb6"} Mar 18 09:04:00.580777 
master-0 kubenswrapper[26053]: I0318 09:04:00.580742 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2fe1000ac68ef028cd1879d1c2cf197302bdbc2feff6610b1fe10df6c0bb6d" Mar 18 09:04:00.580862 master-0 kubenswrapper[26053]: I0318 09:04:00.580769 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 18 09:04:00.590325 master-0 kubenswrapper[26053]: I0318 09:04:00.590256 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" event={"ID":"8dc1b108-349c-48ab-a6e5-5943067ced62","Type":"ContainerStarted","Data":"dde160045725e1ec7a66df4b32cbe94a216d979ce2bac5c04dd47e07113c84ef"} Mar 18 09:04:00.590693 master-0 kubenswrapper[26053]: I0318 09:04:00.590675 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:04:00.598793 master-0 kubenswrapper[26053]: I0318 09:04:00.598233 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" event={"ID":"a5751f72-30f7-439b-a1de-af588611984c","Type":"ContainerStarted","Data":"f90ca571f8ae4553fdfba5853d5ac6436d84df71c1d32efd0944e894bd445a6e"} Mar 18 09:04:00.605621 master-0 kubenswrapper[26053]: I0318 09:04:00.605485 26053 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" exitCode=0 Mar 18 09:04:00.605621 master-0 kubenswrapper[26053]: I0318 09:04:00.605556 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287"} Mar 18 09:04:00.605621 master-0 kubenswrapper[26053]: I0318 
09:04:00.605607 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a96c8ccb4b4c63e2c6383d12c097ddadbeda97be6f3bbb4836b69d805edcddd9"} Mar 18 09:04:00.614058 master-0 kubenswrapper[26053]: I0318 09:04:00.611608 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" Mar 18 09:04:00.744980 master-0 kubenswrapper[26053]: I0318 09:04:00.744934 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 18 09:04:00.745457 master-0 kubenswrapper[26053]: I0318 09:04:00.745290 26053 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 18 09:04:01.267670 master-0 kubenswrapper[26053]: I0318 09:04:01.267509 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:04:01.267670 master-0 kubenswrapper[26053]: I0318 09:04:01.267550 26053 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="822ba66b-89de-4f44-aa7f-39706b0d8a46" Mar 18 09:04:01.280425 master-0 kubenswrapper[26053]: I0318 09:04:01.267990 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-mtpcs" podStartSLOduration=3.341710622 podStartE2EDuration="8.267975047s" podCreationTimestamp="2026-03-18 09:03:53 +0000 UTC" firstStartedPulling="2026-03-18 09:03:54.479165102 +0000 UTC m=+21.972516483" lastFinishedPulling="2026-03-18 09:03:59.405429527 +0000 UTC m=+26.898780908" observedRunningTime="2026-03-18 09:04:01.215412585 +0000 UTC m=+28.708763966" watchObservedRunningTime="2026-03-18 
09:04:01.267975047 +0000 UTC m=+28.761326428" Mar 18 09:04:01.280425 master-0 kubenswrapper[26053]: I0318 09:04:01.270243 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 18 09:04:01.280425 master-0 kubenswrapper[26053]: I0318 09:04:01.270261 26053 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="822ba66b-89de-4f44-aa7f-39706b0d8a46" Mar 18 09:04:01.615929 master-0 kubenswrapper[26053]: I0318 09:04:01.615855 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520"} Mar 18 09:04:01.615929 master-0 kubenswrapper[26053]: I0318 09:04:01.615900 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538"} Mar 18 09:04:02.631302 master-0 kubenswrapper[26053]: I0318 09:04:02.630778 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391"} Mar 18 09:04:03.063813 master-0 kubenswrapper[26053]: I0318 09:04:03.062299 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-bfrtz"] Mar 18 09:04:03.068030 master-0 kubenswrapper[26053]: I0318 09:04:03.067982 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:03.070464 master-0 kubenswrapper[26053]: I0318 09:04:03.070429 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pj2bk"
Mar 18 09:04:03.070708 master-0 kubenswrapper[26053]: I0318 09:04:03.070675 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 09:04:03.070876 master-0 kubenswrapper[26053]: I0318 09:04:03.070860 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 09:04:03.080096 master-0 kubenswrapper[26053]: I0318 09:04:03.078800 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-bfrtz"]
Mar 18 09:04:03.090468 master-0 kubenswrapper[26053]: I0318 09:04:03.090421 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:04:03.162845 master-0 kubenswrapper[26053]: I0318 09:04:03.159819 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5sr\" (UniqueName: \"kubernetes.io/projected/bbedaed5-a2a1-4853-8b60-0baf3d1b143d-kube-api-access-2t5sr\") pod \"downloads-66b8ffb895-bfrtz\" (UID: \"bbedaed5-a2a1-4853-8b60-0baf3d1b143d\") " pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:03.263558 master-0 kubenswrapper[26053]: I0318 09:04:03.263428 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") pod \"e2af879e-1465-40bf-bf72-30c7e89386a3\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") "
Mar 18 09:04:03.263558 master-0 kubenswrapper[26053]: I0318 09:04:03.263536 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") pod \"e2af879e-1465-40bf-bf72-30c7e89386a3\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") "
Mar 18 09:04:03.263558 master-0 kubenswrapper[26053]: I0318 09:04:03.263604 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") pod \"e2af879e-1465-40bf-bf72-30c7e89386a3\" (UID: \"e2af879e-1465-40bf-bf72-30c7e89386a3\") "
Mar 18 09:04:03.265078 master-0 kubenswrapper[26053]: I0318 09:04:03.264532 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e2af879e-1465-40bf-bf72-30c7e89386a3" (UID: "e2af879e-1465-40bf-bf72-30c7e89386a3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:03.265078 master-0 kubenswrapper[26053]: I0318 09:04:03.264855 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t5sr\" (UniqueName: \"kubernetes.io/projected/bbedaed5-a2a1-4853-8b60-0baf3d1b143d-kube-api-access-2t5sr\") pod \"downloads-66b8ffb895-bfrtz\" (UID: \"bbedaed5-a2a1-4853-8b60-0baf3d1b143d\") " pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:03.265181 master-0 kubenswrapper[26053]: I0318 09:04:03.265104 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:03.265181 master-0 kubenswrapper[26053]: I0318 09:04:03.264868 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock" (OuterVolumeSpecName: "var-lock") pod "e2af879e-1465-40bf-bf72-30c7e89386a3" (UID: "e2af879e-1465-40bf-bf72-30c7e89386a3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:03.272895 master-0 kubenswrapper[26053]: I0318 09:04:03.272849 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e2af879e-1465-40bf-bf72-30c7e89386a3" (UID: "e2af879e-1465-40bf-bf72-30c7e89386a3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:04:03.286534 master-0 kubenswrapper[26053]: I0318 09:04:03.286400 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t5sr\" (UniqueName: \"kubernetes.io/projected/bbedaed5-a2a1-4853-8b60-0baf3d1b143d-kube-api-access-2t5sr\") pod \"downloads-66b8ffb895-bfrtz\" (UID: \"bbedaed5-a2a1-4853-8b60-0baf3d1b143d\") " pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:03.366179 master-0 kubenswrapper[26053]: I0318 09:04:03.366107 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e2af879e-1465-40bf-bf72-30c7e89386a3-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:03.366179 master-0 kubenswrapper[26053]: I0318 09:04:03.366148 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e2af879e-1465-40bf-bf72-30c7e89386a3-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:03.401993 master-0 kubenswrapper[26053]: I0318 09:04:03.401951 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:03.635993 master-0 kubenswrapper[26053]: I0318 09:04:03.635933 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"e2af879e-1465-40bf-bf72-30c7e89386a3","Type":"ContainerDied","Data":"0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a"}
Mar 18 09:04:03.635993 master-0 kubenswrapper[26053]: I0318 09:04:03.635984 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0736cc0a2848f729264829f385cc50d1c800409917495f6cb40f4213a06e4f6a"
Mar 18 09:04:03.635993 master-0 kubenswrapper[26053]: I0318 09:04:03.635959 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 18 09:04:03.637733 master-0 kubenswrapper[26053]: I0318 09:04:03.637668 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" event={"ID":"a5751f72-30f7-439b-a1de-af588611984c","Type":"ContainerStarted","Data":"0e96875dceba82cc31dac231289284bde84d2e80f048da7eb5cfc22b94631364"}
Mar 18 09:04:03.638011 master-0 kubenswrapper[26053]: I0318 09:04:03.637976 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq"
Mar 18 09:04:03.638087 master-0 kubenswrapper[26053]: I0318 09:04:03.638035 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:04:03.648963 master-0 kubenswrapper[26053]: I0318 09:04:03.648920 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq"
Mar 18 09:04:03.657725 master-0 kubenswrapper[26053]: I0318 09:04:03.657669 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-7dfd446df6-76mgq" podStartSLOduration=2.516509123 podStartE2EDuration="5.657627008s" podCreationTimestamp="2026-03-18 09:03:58 +0000 UTC" firstStartedPulling="2026-03-18 09:03:59.942624758 +0000 UTC m=+27.435976159" lastFinishedPulling="2026-03-18 09:04:03.083742663 +0000 UTC m=+30.577094044" observedRunningTime="2026-03-18 09:04:03.655107083 +0000 UTC m=+31.148458484" watchObservedRunningTime="2026-03-18 09:04:03.657627008 +0000 UTC m=+31.150978389"
Mar 18 09:04:03.695592 master-0 kubenswrapper[26053]: I0318 09:04:03.695010 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=4.694995799 podStartE2EDuration="4.694995799s" podCreationTimestamp="2026-03-18 09:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:03.673600889 +0000 UTC m=+31.166952270" watchObservedRunningTime="2026-03-18 09:04:03.694995799 +0000 UTC m=+31.188347170"
Mar 18 09:04:03.711595 master-0 kubenswrapper[26053]: E0318 09:04:03.709205 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 09:04:03.729589 master-0 kubenswrapper[26053]: E0318 09:04:03.729287 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 09:04:03.753389 master-0 kubenswrapper[26053]: E0318 09:04:03.751547 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 18 09:04:03.753389 master-0 kubenswrapper[26053]: E0318 09:04:03.751671 26053 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins"
Mar 18 09:04:03.835958 master-0 kubenswrapper[26053]: I0318 09:04:03.835853 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-bfrtz"]
Mar 18 09:04:04.645202 master-0 kubenswrapper[26053]: I0318 09:04:04.645098 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-bfrtz" event={"ID":"bbedaed5-a2a1-4853-8b60-0baf3d1b143d","Type":"ContainerStarted","Data":"2eb9289120f675878dff553fc73171f1b0db99bf4c77becb7e03924e91f742fa"}
Mar 18 09:04:05.091738 master-0 kubenswrapper[26053]: I0318 09:04:05.091138 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:04:05.091738 master-0 kubenswrapper[26053]: I0318 09:04:05.091410 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="95378a840215d5780aa88df876aac909" containerName="startup-monitor" containerID="cri-o://c361cbba945001e9baf7ce5c31f92c9a1b2e62ac88d976a094c24336f0593c2e" gracePeriod=5
Mar 18 09:04:07.532919 master-0 kubenswrapper[26053]: I0318 09:04:07.532855 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68c5849c7c-lqm2r"]
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: E0318 09:04:07.533154 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95378a840215d5780aa88df876aac909" containerName="startup-monitor"
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: I0318 09:04:07.533170 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="95378a840215d5780aa88df876aac909" containerName="startup-monitor"
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: E0318 09:04:07.533191 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2af879e-1465-40bf-bf72-30c7e89386a3" containerName="installer"
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: I0318 09:04:07.533199 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2af879e-1465-40bf-bf72-30c7e89386a3" containerName="installer"
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: I0318 09:04:07.533349 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2af879e-1465-40bf-bf72-30c7e89386a3" containerName="installer"
Mar 18 09:04:07.533492 master-0 kubenswrapper[26053]: I0318 09:04:07.533391 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="95378a840215d5780aa88df876aac909" containerName="startup-monitor"
Mar 18 09:04:07.534440 master-0 kubenswrapper[26053]: I0318 09:04:07.534037 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.536818 master-0 kubenswrapper[26053]: I0318 09:04:07.535948 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-658wv"
Mar 18 09:04:07.538873 master-0 kubenswrapper[26053]: I0318 09:04:07.537002 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 18 09:04:07.538873 master-0 kubenswrapper[26053]: I0318 09:04:07.537060 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 09:04:07.538873 master-0 kubenswrapper[26053]: I0318 09:04:07.537803 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 18 09:04:07.538873 master-0 kubenswrapper[26053]: I0318 09:04:07.538060 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 18 09:04:07.538873 master-0 kubenswrapper[26053]: I0318 09:04:07.538315 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 18 09:04:07.572328 master-0 kubenswrapper[26053]: I0318 09:04:07.572278 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68c5849c7c-lqm2r"]
Mar 18 09:04:07.636091 master-0 kubenswrapper[26053]: I0318 09:04:07.636036 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.636288 master-0 kubenswrapper[26053]: I0318 09:04:07.636117 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.636288 master-0 kubenswrapper[26053]: I0318 09:04:07.636146 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.636288 master-0 kubenswrapper[26053]: I0318 09:04:07.636176 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.636288 master-0 kubenswrapper[26053]: I0318 09:04:07.636207 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2klpq\" (UniqueName: \"kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.636288 master-0 kubenswrapper[26053]: I0318 09:04:07.636225 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738077 master-0 kubenswrapper[26053]: I0318 09:04:07.738002 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738346 master-0 kubenswrapper[26053]: I0318 09:04:07.738064 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738346 master-0 kubenswrapper[26053]: I0318 09:04:07.738264 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738346 master-0 kubenswrapper[26053]: I0318 09:04:07.738308 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738742 master-0 kubenswrapper[26053]: I0318 09:04:07.738357 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2klpq\" (UniqueName: \"kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.738742 master-0 kubenswrapper[26053]: I0318 09:04:07.738385 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.739003 master-0 kubenswrapper[26053]: I0318 09:04:07.738953 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.739347 master-0 kubenswrapper[26053]: I0318 09:04:07.739291 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.739454 master-0 kubenswrapper[26053]: I0318 09:04:07.739331 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.752814 master-0 kubenswrapper[26053]: I0318 09:04:07.752761 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.755550 master-0 kubenswrapper[26053]: I0318 09:04:07.755463 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.758574 master-0 kubenswrapper[26053]: I0318 09:04:07.758524 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2klpq\" (UniqueName: \"kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq\") pod \"console-68c5849c7c-lqm2r\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") " pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:07.853704 master-0 kubenswrapper[26053]: I0318 09:04:07.853542 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:04:08.348826 master-0 kubenswrapper[26053]: I0318 09:04:08.348769 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68c5849c7c-lqm2r"]
Mar 18 09:04:08.357527 master-0 kubenswrapper[26053]: W0318 09:04:08.357466 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32425206_41b7_427e_8773_f650801d9d76.slice/crio-94154839b8010dc6af1ce4ababf21622beef8733429a4c5de63c874606a0f08f WatchSource:0}: Error finding container 94154839b8010dc6af1ce4ababf21622beef8733429a4c5de63c874606a0f08f: Status 404 returned error can't find the container with id 94154839b8010dc6af1ce4ababf21622beef8733429a4c5de63c874606a0f08f
Mar 18 09:04:08.406721 master-0 kubenswrapper[26053]: I0318 09:04:08.406679 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68c5849c7c-lqm2r" event={"ID":"32425206-41b7-427e-8773-f650801d9d76","Type":"ContainerStarted","Data":"94154839b8010dc6af1ce4ababf21622beef8733429a4c5de63c874606a0f08f"}
Mar 18 09:04:08.564409 master-0 kubenswrapper[26053]: I0318 09:04:08.564350 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 09:04:08.565454 master-0 kubenswrapper[26053]: I0318 09:04:08.565425 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.567476 master-0 kubenswrapper[26053]: I0318 09:04:08.567443 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6rhb"
Mar 18 09:04:08.567921 master-0 kubenswrapper[26053]: I0318 09:04:08.567888 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 18 09:04:08.703053 master-0 kubenswrapper[26053]: I0318 09:04:08.702938 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 09:04:08.792919 master-0 kubenswrapper[26053]: I0318 09:04:08.792843 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.793094 master-0 kubenswrapper[26053]: I0318 09:04:08.792996 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.793094 master-0 kubenswrapper[26053]: I0318 09:04:08.793061 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.894321 master-0 kubenswrapper[26053]: I0318 09:04:08.894255 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.894321 master-0 kubenswrapper[26053]: I0318 09:04:08.894314 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.894690 master-0 kubenswrapper[26053]: I0318 09:04:08.894358 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.894690 master-0 kubenswrapper[26053]: I0318 09:04:08.894422 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.894690 master-0 kubenswrapper[26053]: I0318 09:04:08.894490 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:08.909658 master-0 kubenswrapper[26053]: I0318 09:04:08.909616 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:09.184519 master-0 kubenswrapper[26053]: I0318 09:04:09.184435 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:09.641078 master-0 kubenswrapper[26053]: I0318 09:04:09.641014 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 18 09:04:10.023952 master-0 kubenswrapper[26053]: I0318 09:04:10.023891 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx"
Mar 18 09:04:10.027929 master-0 kubenswrapper[26053]: I0318 09:04:10.027897 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-599f97d97f-6zmlx"
Mar 18 09:04:10.425168 master-0 kubenswrapper[26053]: I0318 09:04:10.425095 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc","Type":"ContainerStarted","Data":"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"}
Mar 18 09:04:10.425168 master-0 kubenswrapper[26053]: I0318 09:04:10.425170 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc","Type":"ContainerStarted","Data":"c16cd65f58b8e23c74dc64601a42b5adc826929b454a13c093edc446b0a72035"}
Mar 18 09:04:10.427730 master-0 kubenswrapper[26053]: I0318 09:04:10.427679 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95378a840215d5780aa88df876aac909/startup-monitor/0.log"
Mar 18 09:04:10.427730 master-0 kubenswrapper[26053]: I0318 09:04:10.427737 26053 generic.go:334] "Generic (PLEG): container finished" podID="95378a840215d5780aa88df876aac909" containerID="c361cbba945001e9baf7ce5c31f92c9a1b2e62ac88d976a094c24336f0593c2e" exitCode=137
Mar 18 09:04:10.444696 master-0 kubenswrapper[26053]: I0318 09:04:10.444501 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=2.444473802 podStartE2EDuration="2.444473802s" podCreationTimestamp="2026-03-18 09:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:10.443271731 +0000 UTC m=+37.936623132" watchObservedRunningTime="2026-03-18 09:04:10.444473802 +0000 UTC m=+37.937825213"
Mar 18 09:04:10.663023 master-0 kubenswrapper[26053]: I0318 09:04:10.662799 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95378a840215d5780aa88df876aac909/startup-monitor/0.log"
Mar 18 09:04:10.663023 master-0 kubenswrapper[26053]: I0318 09:04:10.662866 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:04:10.738399 master-0 kubenswrapper[26053]: I0318 09:04:10.738173 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 18 09:04:10.755664 master-0 kubenswrapper[26053]: I0318 09:04:10.755619 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:04:10.755859 master-0 kubenswrapper[26053]: I0318 09:04:10.755679 26053 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="f4e567a6-3a5e-4c61-8a8b-164c29e02559"
Mar 18 09:04:10.758626 master-0 kubenswrapper[26053]: I0318 09:04:10.758591 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:04:10.758756 master-0 kubenswrapper[26053]: I0318 09:04:10.758735 26053 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="f4e567a6-3a5e-4c61-8a8b-164c29e02559"
Mar 18 09:04:10.825627 master-0 kubenswrapper[26053]: I0318 09:04:10.825515 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") pod \"95378a840215d5780aa88df876aac909\" (UID: \"95378a840215d5780aa88df876aac909\") "
Mar 18 09:04:10.825627 master-0 kubenswrapper[26053]: I0318 09:04:10.825626 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") pod \"95378a840215d5780aa88df876aac909\" (UID: \"95378a840215d5780aa88df876aac909\") "
Mar 18 09:04:10.825998 master-0 kubenswrapper[26053]: I0318 09:04:10.825620 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log" (OuterVolumeSpecName: "var-log") pod "95378a840215d5780aa88df876aac909" (UID: "95378a840215d5780aa88df876aac909"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:10.825998 master-0 kubenswrapper[26053]: I0318 09:04:10.825660 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests" (OuterVolumeSpecName: "manifests") pod "95378a840215d5780aa88df876aac909" (UID: "95378a840215d5780aa88df876aac909"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:10.825998 master-0 kubenswrapper[26053]: I0318 09:04:10.825756 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") pod \"95378a840215d5780aa88df876aac909\" (UID: \"95378a840215d5780aa88df876aac909\") "
Mar 18 09:04:10.825998 master-0 kubenswrapper[26053]: I0318 09:04:10.825825 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "95378a840215d5780aa88df876aac909" (UID: "95378a840215d5780aa88df876aac909"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:10.826143 master-0 kubenswrapper[26053]: I0318 09:04:10.826008 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") pod \"95378a840215d5780aa88df876aac909\" (UID: \"95378a840215d5780aa88df876aac909\") "
Mar 18 09:04:10.826143 master-0 kubenswrapper[26053]: I0318 09:04:10.826057 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") pod \"95378a840215d5780aa88df876aac909\" (UID: \"95378a840215d5780aa88df876aac909\") "
Mar 18 09:04:10.826340 master-0 kubenswrapper[26053]: I0318 09:04:10.826288 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock" (OuterVolumeSpecName: "var-lock") pod "95378a840215d5780aa88df876aac909" (UID: "95378a840215d5780aa88df876aac909"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:10.826801 master-0 kubenswrapper[26053]: I0318 09:04:10.826766 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:10.826801 master-0 kubenswrapper[26053]: I0318 09:04:10.826794 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:10.826949 master-0 kubenswrapper[26053]: I0318 09:04:10.826808 26053 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:10.826949 master-0 kubenswrapper[26053]: I0318 09:04:10.826820 26053 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:10.833040 master-0 kubenswrapper[26053]: I0318 09:04:10.832992 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "95378a840215d5780aa88df876aac909" (UID: "95378a840215d5780aa88df876aac909"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:10.928442 master-0 kubenswrapper[26053]: I0318 09:04:10.928374 26053 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/95378a840215d5780aa88df876aac909-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:11.343750 master-0 kubenswrapper[26053]: I0318 09:04:11.343697 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 09:04:11.343979 master-0 kubenswrapper[26053]: I0318 09:04:11.343880 26053 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 18 09:04:11.363317 master-0 kubenswrapper[26053]: I0318 09:04:11.363260 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6ff5l"
Mar 18 09:04:11.446828 master-0 kubenswrapper[26053]: I0318 09:04:11.446780 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_95378a840215d5780aa88df876aac909/startup-monitor/0.log"
Mar 18 09:04:11.447264 master-0 kubenswrapper[26053]: I0318 09:04:11.447226 26053 scope.go:117] "RemoveContainer" containerID="c361cbba945001e9baf7ce5c31f92c9a1b2e62ac88d976a094c24336f0593c2e"
Mar 18 09:04:11.447742 master-0 kubenswrapper[26053]: I0318 09:04:11.447693 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:04:12.744698 master-0 kubenswrapper[26053]: I0318 09:04:12.744204 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95378a840215d5780aa88df876aac909" path="/var/lib/kubelet/pods/95378a840215d5780aa88df876aac909/volumes" Mar 18 09:04:13.688534 master-0 kubenswrapper[26053]: E0318 09:04:13.687640 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:04:13.690027 master-0 kubenswrapper[26053]: E0318 09:04:13.689269 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:04:13.690605 master-0 kubenswrapper[26053]: E0318 09:04:13.690507 26053 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 18 09:04:13.690605 master-0 kubenswrapper[26053]: E0318 09:04:13.690537 26053 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" Mar 18 09:04:15.091808 
master-0 kubenswrapper[26053]: I0318 09:04:15.091614 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:04:15.091808 master-0 kubenswrapper[26053]: E0318 09:04:15.091825 26053 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:04:15.092706 master-0 kubenswrapper[26053]: E0318 09:04:15.091870 26053 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-retry-2-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:04:15.092706 master-0 kubenswrapper[26053]: E0318 09:04:15.091926 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access podName:c46fcf39-9167-4ec2-9d2c-0a622bc69d13 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:47.091908832 +0000 UTC m=+74.585260203 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access") pod "installer-1-retry-2-master-0" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 18 09:04:15.476699 master-0 kubenswrapper[26053]: I0318 09:04:15.476633 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-546755554c-h5vql"] Mar 18 09:04:15.478308 master-0 kubenswrapper[26053]: I0318 09:04:15.478273 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.484645 master-0 kubenswrapper[26053]: I0318 09:04:15.484327 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68c5849c7c-lqm2r" event={"ID":"32425206-41b7-427e-8773-f650801d9d76","Type":"ContainerStarted","Data":"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"} Mar 18 09:04:15.486039 master-0 kubenswrapper[26053]: I0318 09:04:15.485992 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 09:04:15.492442 master-0 kubenswrapper[26053]: I0318 09:04:15.492386 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-546755554c-h5vql"] Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505631 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505690 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505758 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config\") pod \"console-546755554c-h5vql\" (UID: 
\"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505831 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505888 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgc5\" (UniqueName: \"kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.506641 master-0 kubenswrapper[26053]: I0318 09:04:15.505915 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.507618 master-0 kubenswrapper[26053]: I0318 09:04:15.507389 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.530345 master-0 kubenswrapper[26053]: I0318 09:04:15.530260 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-68c5849c7c-lqm2r" podStartSLOduration=2.161981166 podStartE2EDuration="8.53024047s" podCreationTimestamp="2026-03-18 09:04:07 +0000 UTC" firstStartedPulling="2026-03-18 09:04:08.360431623 +0000 UTC m=+35.853783004" lastFinishedPulling="2026-03-18 09:04:14.728690927 +0000 UTC m=+42.222042308" observedRunningTime="2026-03-18 09:04:15.529938912 +0000 UTC m=+43.023290303" watchObservedRunningTime="2026-03-18 09:04:15.53024047 +0000 UTC m=+43.023591851" Mar 18 09:04:15.613579 master-0 kubenswrapper[26053]: I0318 09:04:15.613476 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613579 master-0 kubenswrapper[26053]: I0318 09:04:15.613544 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgc5\" (UniqueName: \"kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613949 master-0 kubenswrapper[26053]: I0318 09:04:15.613590 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613949 master-0 kubenswrapper[26053]: I0318 09:04:15.613669 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert\") pod 
\"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613949 master-0 kubenswrapper[26053]: I0318 09:04:15.613701 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613949 master-0 kubenswrapper[26053]: I0318 09:04:15.613720 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.613949 master-0 kubenswrapper[26053]: I0318 09:04:15.613754 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.616260 master-0 kubenswrapper[26053]: I0318 09:04:15.616219 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.617186 master-0 kubenswrapper[26053]: I0318 09:04:15.617037 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.618114 master-0 kubenswrapper[26053]: I0318 09:04:15.618074 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.618458 master-0 kubenswrapper[26053]: I0318 09:04:15.618438 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.618598 master-0 kubenswrapper[26053]: I0318 09:04:15.618533 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.619506 master-0 kubenswrapper[26053]: I0318 09:04:15.619470 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.638441 master-0 kubenswrapper[26053]: I0318 09:04:15.638365 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wmgc5\" (UniqueName: \"kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5\") pod \"console-546755554c-h5vql\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:15.811752 master-0 kubenswrapper[26053]: I0318 09:04:15.811650 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:16.404489 master-0 kubenswrapper[26053]: I0318 09:04:16.404362 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-546755554c-h5vql"] Mar 18 09:04:16.410533 master-0 kubenswrapper[26053]: W0318 09:04:16.410496 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dec975e_18dd_4f13_ac8b_56d9fca1c1f7.slice/crio-5c4074685f6a68d304a6c74d54b4b2169802ed0ee9c82f481051d37f2810081f WatchSource:0}: Error finding container 5c4074685f6a68d304a6c74d54b4b2169802ed0ee9c82f481051d37f2810081f: Status 404 returned error can't find the container with id 5c4074685f6a68d304a6c74d54b4b2169802ed0ee9c82f481051d37f2810081f Mar 18 09:04:16.492409 master-0 kubenswrapper[26053]: I0318 09:04:16.492335 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-546755554c-h5vql" event={"ID":"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7","Type":"ContainerStarted","Data":"5c4074685f6a68d304a6c74d54b4b2169802ed0ee9c82f481051d37f2810081f"} Mar 18 09:04:17.621637 master-0 kubenswrapper[26053]: I0318 09:04:17.621547 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-546755554c-h5vql" event={"ID":"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7","Type":"ContainerStarted","Data":"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610"} Mar 18 09:04:17.679870 master-0 kubenswrapper[26053]: I0318 09:04:17.678335 26053 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/console-546755554c-h5vql" podStartSLOduration=2.6783107250000002 podStartE2EDuration="2.678310725s" podCreationTimestamp="2026-03-18 09:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:17.674678762 +0000 UTC m=+45.168030153" watchObservedRunningTime="2026-03-18 09:04:17.678310725 +0000 UTC m=+45.171662106" Mar 18 09:04:17.731938 master-0 kubenswrapper[26053]: I0318 09:04:17.731878 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qkhnq_47bfce36-23a9-4523-af40-dfeaaee7b671/kube-multus-additional-cni-plugins/0.log" Mar 18 09:04:17.732165 master-0 kubenswrapper[26053]: I0318 09:04:17.732008 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:04:17.803386 master-0 kubenswrapper[26053]: I0318 09:04:17.803315 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready\") pod \"47bfce36-23a9-4523-af40-dfeaaee7b671\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " Mar 18 09:04:17.803636 master-0 kubenswrapper[26053]: I0318 09:04:17.803414 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir\") pod \"47bfce36-23a9-4523-af40-dfeaaee7b671\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " Mar 18 09:04:17.803636 master-0 kubenswrapper[26053]: I0318 09:04:17.803510 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") pod \"47bfce36-23a9-4523-af40-dfeaaee7b671\" (UID: 
\"47bfce36-23a9-4523-af40-dfeaaee7b671\") " Mar 18 09:04:17.803636 master-0 kubenswrapper[26053]: I0318 09:04:17.803553 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4wvk\" (UniqueName: \"kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk\") pod \"47bfce36-23a9-4523-af40-dfeaaee7b671\" (UID: \"47bfce36-23a9-4523-af40-dfeaaee7b671\") " Mar 18 09:04:17.803777 master-0 kubenswrapper[26053]: I0318 09:04:17.803628 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "47bfce36-23a9-4523-af40-dfeaaee7b671" (UID: "47bfce36-23a9-4523-af40-dfeaaee7b671"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:04:17.803983 master-0 kubenswrapper[26053]: I0318 09:04:17.803927 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready" (OuterVolumeSpecName: "ready") pod "47bfce36-23a9-4523-af40-dfeaaee7b671" (UID: "47bfce36-23a9-4523-af40-dfeaaee7b671"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:04:17.803983 master-0 kubenswrapper[26053]: I0318 09:04:17.803959 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "47bfce36-23a9-4523-af40-dfeaaee7b671" (UID: "47bfce36-23a9-4523-af40-dfeaaee7b671"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:04:17.804361 master-0 kubenswrapper[26053]: I0318 09:04:17.804332 26053 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47bfce36-23a9-4523-af40-dfeaaee7b671-ready\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:17.804417 master-0 kubenswrapper[26053]: I0318 09:04:17.804362 26053 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47bfce36-23a9-4523-af40-dfeaaee7b671-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:17.804417 master-0 kubenswrapper[26053]: I0318 09:04:17.804382 26053 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47bfce36-23a9-4523-af40-dfeaaee7b671-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:17.806925 master-0 kubenswrapper[26053]: I0318 09:04:17.806889 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk" (OuterVolumeSpecName: "kube-api-access-j4wvk") pod "47bfce36-23a9-4523-af40-dfeaaee7b671" (UID: "47bfce36-23a9-4523-af40-dfeaaee7b671"). InnerVolumeSpecName "kube-api-access-j4wvk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:17.855185 master-0 kubenswrapper[26053]: I0318 09:04:17.853695 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68c5849c7c-lqm2r" Mar 18 09:04:17.855185 master-0 kubenswrapper[26053]: I0318 09:04:17.853787 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68c5849c7c-lqm2r" Mar 18 09:04:17.856698 master-0 kubenswrapper[26053]: I0318 09:04:17.855823 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:04:17.856698 master-0 kubenswrapper[26053]: I0318 09:04:17.855862 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:04:17.906699 master-0 kubenswrapper[26053]: I0318 09:04:17.906644 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4wvk\" (UniqueName: \"kubernetes.io/projected/47bfce36-23a9-4523-af40-dfeaaee7b671-kube-api-access-j4wvk\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:18.630680 master-0 kubenswrapper[26053]: I0318 09:04:18.630635 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-qkhnq_47bfce36-23a9-4523-af40-dfeaaee7b671/kube-multus-additional-cni-plugins/0.log" Mar 18 09:04:18.631174 master-0 kubenswrapper[26053]: I0318 09:04:18.630685 26053 generic.go:334] "Generic (PLEG): container finished" podID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" 
exitCode=137 Mar 18 09:04:18.631174 master-0 kubenswrapper[26053]: I0318 09:04:18.630744 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" event={"ID":"47bfce36-23a9-4523-af40-dfeaaee7b671","Type":"ContainerDied","Data":"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1"} Mar 18 09:04:18.631174 master-0 kubenswrapper[26053]: I0318 09:04:18.630786 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" Mar 18 09:04:18.631174 master-0 kubenswrapper[26053]: I0318 09:04:18.630833 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-qkhnq" event={"ID":"47bfce36-23a9-4523-af40-dfeaaee7b671","Type":"ContainerDied","Data":"769f6a7e1832b9edaf00d364768854d8db13038252fd1a1b32d6fa56948828f0"} Mar 18 09:04:18.631174 master-0 kubenswrapper[26053]: I0318 09:04:18.630879 26053 scope.go:117] "RemoveContainer" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" Mar 18 09:04:18.653219 master-0 kubenswrapper[26053]: I0318 09:04:18.651646 26053 scope.go:117] "RemoveContainer" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" Mar 18 09:04:18.653219 master-0 kubenswrapper[26053]: E0318 09:04:18.652033 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1\": container with ID starting with e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1 not found: ID does not exist" containerID="e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1" Mar 18 09:04:18.653219 master-0 kubenswrapper[26053]: I0318 09:04:18.652061 26053 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1"} err="failed to get container status \"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1\": rpc error: code = NotFound desc = could not find container \"e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1\": container with ID starting with e551e3afc979e135d8e5fbbd918353c842b725337d08517353f68fc693dc7cd1 not found: ID does not exist" Mar 18 09:04:18.668952 master-0 kubenswrapper[26053]: I0318 09:04:18.668905 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qkhnq"] Mar 18 09:04:18.691282 master-0 kubenswrapper[26053]: I0318 09:04:18.691225 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-qkhnq"] Mar 18 09:04:18.736339 master-0 kubenswrapper[26053]: I0318 09:04:18.736265 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" path="/var/lib/kubelet/pods/47bfce36-23a9-4523-af40-dfeaaee7b671/volumes" Mar 18 09:04:21.187028 master-0 kubenswrapper[26053]: I0318 09:04:21.185209 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 09:04:21.187028 master-0 kubenswrapper[26053]: I0318 09:04:21.185531 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" containerName="installer" containerID="cri-o://89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81" gracePeriod=30 Mar 18 09:04:23.937410 master-0 kubenswrapper[26053]: I0318 09:04:23.930142 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:23.937410 master-0 kubenswrapper[26053]: E0318 09:04:23.931178 26053 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" Mar 18 09:04:23.937410 master-0 kubenswrapper[26053]: I0318 09:04:23.931207 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" Mar 18 09:04:23.937410 master-0 kubenswrapper[26053]: I0318 09:04:23.932149 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bfce36-23a9-4523-af40-dfeaaee7b671" containerName="kube-multus-additional-cni-plugins" Mar 18 09:04:23.944710 master-0 kubenswrapper[26053]: I0318 09:04:23.939626 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.030437 master-0 kubenswrapper[26053]: I0318 09:04:24.030307 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.030437 master-0 kubenswrapper[26053]: I0318 09:04:24.030443 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.031067 master-0 kubenswrapper[26053]: I0318 09:04:24.030792 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock\") pod \"installer-3-master-0\" (UID: 
\"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.058646 master-0 kubenswrapper[26053]: I0318 09:04:24.057373 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:24.132026 master-0 kubenswrapper[26053]: I0318 09:04:24.131946 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.132026 master-0 kubenswrapper[26053]: I0318 09:04:24.132029 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.132337 master-0 kubenswrapper[26053]: I0318 09:04:24.132185 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.132337 master-0 kubenswrapper[26053]: I0318 09:04:24.132197 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.132443 master-0 kubenswrapper[26053]: I0318 09:04:24.132350 26053 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.282807 master-0 kubenswrapper[26053]: I0318 09:04:24.281318 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access\") pod \"installer-3-master-0\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.523768 master-0 kubenswrapper[26053]: I0318 09:04:24.523700 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/multus-admission-controller/0.log" Mar 18 09:04:24.524042 master-0 kubenswrapper[26053]: I0318 09:04:24.523821 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:04:24.561637 master-0 kubenswrapper[26053]: I0318 09:04:24.561536 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 18 09:04:24.663994 master-0 kubenswrapper[26053]: I0318 09:04:24.663873 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") pod \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " Mar 18 09:04:24.663994 master-0 kubenswrapper[26053]: I0318 09:04:24.663948 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") pod \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\" (UID: \"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac\") " Mar 18 09:04:24.671100 master-0 kubenswrapper[26053]: I0318 09:04:24.670701 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:04:24.671559 master-0 kubenswrapper[26053]: I0318 09:04:24.671185 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj" (OuterVolumeSpecName: "kube-api-access-77sfj") pod "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" (UID: "7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac"). InnerVolumeSpecName "kube-api-access-77sfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:24.686540 master-0 kubenswrapper[26053]: I0318 09:04:24.686351 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-25rbq_7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/multus-admission-controller/0.log" Mar 18 09:04:24.686540 master-0 kubenswrapper[26053]: I0318 09:04:24.686399 26053 generic.go:334] "Generic (PLEG): container finished" podID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerID="306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b" exitCode=137 Mar 18 09:04:24.686540 master-0 kubenswrapper[26053]: I0318 09:04:24.686428 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerDied","Data":"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b"} Mar 18 09:04:24.686540 master-0 kubenswrapper[26053]: I0318 09:04:24.686455 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" event={"ID":"7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac","Type":"ContainerDied","Data":"ea6882d5a36974530033e9c50aa841cb8b79a3300b3854f2b7d0678f67c4f1bf"} Mar 18 09:04:24.686540 master-0 kubenswrapper[26053]: I0318 09:04:24.686475 26053 scope.go:117] "RemoveContainer" containerID="47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f" Mar 18 09:04:24.686893 master-0 kubenswrapper[26053]: I0318 09:04:24.686626 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq" Mar 18 09:04:24.709088 master-0 kubenswrapper[26053]: I0318 09:04:24.709022 26053 scope.go:117] "RemoveContainer" containerID="306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b" Mar 18 09:04:24.733251 master-0 kubenswrapper[26053]: I0318 09:04:24.733187 26053 scope.go:117] "RemoveContainer" containerID="47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f" Mar 18 09:04:24.733832 master-0 kubenswrapper[26053]: E0318 09:04:24.733789 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f\": container with ID starting with 47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f not found: ID does not exist" containerID="47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f" Mar 18 09:04:24.733903 master-0 kubenswrapper[26053]: I0318 09:04:24.733832 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f"} err="failed to get container status \"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f\": rpc error: code = NotFound desc = could not find container \"47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f\": container with ID starting with 47ca95dc496d48ed1be652a66a1b8953731764f76ff2c9029488dd76430f4b5f not found: ID does not exist" Mar 18 09:04:24.733903 master-0 kubenswrapper[26053]: I0318 09:04:24.733859 26053 scope.go:117] "RemoveContainer" containerID="306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b" Mar 18 09:04:24.734624 master-0 kubenswrapper[26053]: E0318 09:04:24.734406 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b\": container with ID starting with 306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b not found: ID does not exist" containerID="306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b" Mar 18 09:04:24.734624 master-0 kubenswrapper[26053]: I0318 09:04:24.734433 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b"} err="failed to get container status \"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b\": rpc error: code = NotFound desc = could not find container \"306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b\": container with ID starting with 306577c0e1e759db97ccf74f1b2f6fc8127008236779e60afbca9657add3ab8b not found: ID does not exist" Mar 18 09:04:24.766518 master-0 kubenswrapper[26053]: I0318 09:04:24.766408 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77sfj\" (UniqueName: \"kubernetes.io/projected/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-kube-api-access-77sfj\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:24.766518 master-0 kubenswrapper[26053]: I0318 09:04:24.766457 26053 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:25.186980 master-0 kubenswrapper[26053]: I0318 09:04:25.186167 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"] Mar 18 09:04:25.188120 master-0 kubenswrapper[26053]: I0318 09:04:25.188101 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:25.205461 master-0 kubenswrapper[26053]: W0318 09:04:25.205400 26053 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-podf663d91b_029e_4abd_9bbe_2d13331b8132.slice/crio-8191491493a7d30258ce3bc56a805bcd12eb3f892abb559bf60bc1ed9e13d95c WatchSource:0}: Error finding container 8191491493a7d30258ce3bc56a805bcd12eb3f892abb559bf60bc1ed9e13d95c: Status 404 returned error can't find the container with id 8191491493a7d30258ce3bc56a805bcd12eb3f892abb559bf60bc1ed9e13d95c Mar 18 09:04:25.386410 master-0 kubenswrapper[26053]: I0318 09:04:25.386336 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-25rbq"] Mar 18 09:04:25.699327 master-0 kubenswrapper[26053]: I0318 09:04:25.698789 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f663d91b-029e-4abd-9bbe-2d13331b8132","Type":"ContainerStarted","Data":"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"} Mar 18 09:04:25.699327 master-0 kubenswrapper[26053]: I0318 09:04:25.698852 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f663d91b-029e-4abd-9bbe-2d13331b8132","Type":"ContainerStarted","Data":"8191491493a7d30258ce3bc56a805bcd12eb3f892abb559bf60bc1ed9e13d95c"} Mar 18 09:04:25.736587 master-0 kubenswrapper[26053]: I0318 09:04:25.732039 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.732018393 podStartE2EDuration="2.732018393s" podCreationTimestamp="2026-03-18 09:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:25.729146909 +0000 UTC m=+53.222498290" watchObservedRunningTime="2026-03-18 09:04:25.732018393 +0000 UTC m=+53.225369774" Mar 18 09:04:25.812691 master-0 kubenswrapper[26053]: I0318 09:04:25.812532 26053 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:25.812691 master-0 kubenswrapper[26053]: I0318 09:04:25.812619 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:04:25.814649 master-0 kubenswrapper[26053]: I0318 09:04:25.814606 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:04:25.814759 master-0 kubenswrapper[26053]: I0318 09:04:25.814674 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 18 09:04:26.738537 master-0 kubenswrapper[26053]: I0318 09:04:26.738472 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" path="/var/lib/kubelet/pods/7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac/volumes" Mar 18 09:04:27.855227 master-0 kubenswrapper[26053]: I0318 09:04:27.855152 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:04:27.855959 master-0 kubenswrapper[26053]: I0318 09:04:27.855247 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:04:32.750322 
master-0 kubenswrapper[26053]: I0318 09:04:32.750278 26053 scope.go:117] "RemoveContainer" containerID="5a3bd52bc46563d9e0f440951b976daa40dee6ea05c0ee56171ddc976c094e95" Mar 18 09:04:34.173060 master-0 kubenswrapper[26053]: I0318 09:04:34.171293 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:34.173060 master-0 kubenswrapper[26053]: I0318 09:04:34.171483 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-master-0" podUID="f663d91b-029e-4abd-9bbe-2d13331b8132" containerName="installer" containerID="cri-o://7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb" gracePeriod=30 Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.207479 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"] Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: E0318 09:04:34.207957 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="kube-rbac-proxy" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.207977 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="kube-rbac-proxy" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: E0318 09:04:34.208006 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="multus-admission-controller" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.208016 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="multus-admission-controller" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.208185 26053 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="kube-rbac-proxy" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.208219 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e48895e-f8cf-4e62-8b9a-5a50d8a6ccac" containerName="multus-admission-controller" Mar 18 09:04:34.209426 master-0 kubenswrapper[26053]: I0318 09:04:34.208790 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.213313 master-0 kubenswrapper[26053]: I0318 09:04:34.213170 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.214422 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.214707 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.214933 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.215104 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-gfnn4" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.215298 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 18 09:04:34.216669 master-0 kubenswrapper[26053]: I0318 09:04:34.215445 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 18 09:04:34.220301 master-0 kubenswrapper[26053]: I0318 
09:04:34.220183 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 09:04:34.223790 master-0 kubenswrapper[26053]: I0318 09:04:34.222333 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 09:04:34.223790 master-0 kubenswrapper[26053]: I0318 09:04:34.222453 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 09:04:34.223790 master-0 kubenswrapper[26053]: I0318 09:04:34.222548 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 09:04:34.223790 master-0 kubenswrapper[26053]: I0318 09:04:34.223092 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:04:34.232332 master-0 kubenswrapper[26053]: I0318 09:04:34.229264 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:04:34.240594 master-0 kubenswrapper[26053]: I0318 09:04:34.234602 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"] Mar 18 09:04:34.250606 master-0 kubenswrapper[26053]: I0318 09:04:34.242550 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: I0318 09:04:34.374883 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns9j2\" (UniqueName: \"kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " 
pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: I0318 09:04:34.374953 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: I0318 09:04:34.374988 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: I0318 09:04:34.375024 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: I0318 09:04:34.375049 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375053 master-0 kubenswrapper[26053]: 
I0318 09:04:34.375066 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375087 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375114 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375134 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375153 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375168 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375185 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.375386 master-0 kubenswrapper[26053]: I0318 09:04:34.375238 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476027 master-0 kubenswrapper[26053]: I0318 09:04:34.475981 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476226 master-0 kubenswrapper[26053]: I0318 09:04:34.476050 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476226 master-0 kubenswrapper[26053]: I0318 09:04:34.476086 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476301 master-0 kubenswrapper[26053]: I0318 09:04:34.476226 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476301 master-0 kubenswrapper[26053]: I0318 09:04:34.476256 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " 
pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476301 master-0 kubenswrapper[26053]: I0318 09:04:34.476289 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476411 master-0 kubenswrapper[26053]: I0318 09:04:34.476364 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476411 master-0 kubenswrapper[26053]: I0318 09:04:34.476392 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns9j2\" (UniqueName: \"kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476469 master-0 kubenswrapper[26053]: I0318 09:04:34.476413 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:34.476469 master-0 kubenswrapper[26053]: I0318 09:04:34.476436 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.476469 master-0 kubenswrapper[26053]: I0318 09:04:34.476459 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.476558 master-0 kubenswrapper[26053]: I0318 09:04:34.476483 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.476558 master-0 kubenswrapper[26053]: I0318 09:04:34.476498 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.477127 master-0 kubenswrapper[26053]: E0318 09:04:34.477081 26053 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:34.477195 master-0 kubenswrapper[26053]: E0318 09:04:34.477179 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig podName:5ce81927-d5d1-4d4c-99f9-9e0af2a2a997 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:34.977157609 +0000 UTC m=+62.470508990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig") pod "oauth-openshift-759d994cb6-pm8qx" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:34.477292 master-0 kubenswrapper[26053]: I0318 09:04:34.477257 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.477505 master-0 kubenswrapper[26053]: I0318 09:04:34.477475 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.477825 master-0 kubenswrapper[26053]: I0318 09:04:34.477794 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.477895 master-0 kubenswrapper[26053]: I0318 09:04:34.477874 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.479528 master-0 kubenswrapper[26053]: I0318 09:04:34.479449 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.480185 master-0 kubenswrapper[26053]: I0318 09:04:34.480145 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.480362 master-0 kubenswrapper[26053]: I0318 09:04:34.480324 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.480924 master-0 kubenswrapper[26053]: I0318 09:04:34.480904 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.482128 master-0 kubenswrapper[26053]: I0318 09:04:34.482104 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.486054 master-0 kubenswrapper[26053]: I0318 09:04:34.486007 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.486481 master-0 kubenswrapper[26053]: I0318 09:04:34.486429 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.493971 master-0 kubenswrapper[26053]: I0318 09:04:34.493909 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns9j2\" (UniqueName: \"kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.985328 master-0 kubenswrapper[26053]: I0318 09:04:34.985256 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:34.985549 master-0 kubenswrapper[26053]: E0318 09:04:34.985396 26053 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:34.985549 master-0 kubenswrapper[26053]: E0318 09:04:34.985453 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig podName:5ce81927-d5d1-4d4c-99f9-9e0af2a2a997 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:35.985437726 +0000 UTC m=+63.478789107 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig") pod "oauth-openshift-759d994cb6-pm8qx" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:35.814128 master-0 kubenswrapper[26053]: I0318 09:04:35.813987 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:04:35.814128 master-0 kubenswrapper[26053]: I0318 09:04:35.814049 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:04:35.962177 master-0 kubenswrapper[26053]: I0318 09:04:35.962114 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 09:04:35.963398 master-0 kubenswrapper[26053]: I0318 09:04:35.963368 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:35.975850 master-0 kubenswrapper[26053]: I0318 09:04:35.975805 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 09:04:36.014355 master-0 kubenswrapper[26053]: I0318 09:04:36.014289 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:36.014355 master-0 kubenswrapper[26053]: I0318 09:04:36.014355 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.014701 master-0 kubenswrapper[26053]: I0318 09:04:36.014387 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.014701 master-0 kubenswrapper[26053]: E0318 09:04:36.014410 26053 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:36.014701 master-0 kubenswrapper[26053]: I0318 09:04:36.014483 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.014701 master-0 kubenswrapper[26053]: E0318 09:04:36.014500 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig podName:5ce81927-d5d1-4d4c-99f9-9e0af2a2a997 nodeName:}" failed. No retries permitted until 2026-03-18 09:04:38.014477742 +0000 UTC m=+65.507829123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig") pod "oauth-openshift-759d994cb6-pm8qx" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997") : configmap "v4-0-config-system-cliconfig" not found
Mar 18 09:04:36.116207 master-0 kubenswrapper[26053]: I0318 09:04:36.116081 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.116396 master-0 kubenswrapper[26053]: I0318 09:04:36.116215 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.116396 master-0 kubenswrapper[26053]: I0318 09:04:36.116293 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.116498 master-0 kubenswrapper[26053]: I0318 09:04:36.116404 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.116631 master-0 kubenswrapper[26053]: I0318 09:04:36.116499 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.131735 master-0 kubenswrapper[26053]: I0318 09:04:36.131684 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") " pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:36.285763 master-0 kubenswrapper[26053]: I0318 09:04:36.285713 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:04:37.854857 master-0 kubenswrapper[26053]: I0318 09:04:37.854809 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body=
Mar 18 09:04:37.854857 master-0 kubenswrapper[26053]: I0318 09:04:37.854871 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused"
Mar 18 09:04:38.053696 master-0 kubenswrapper[26053]: I0318 09:04:38.053629 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:38.054504 master-0 kubenswrapper[26053]: I0318 09:04:38.054481 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"oauth-openshift-759d994cb6-pm8qx\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:38.208273 master-0 kubenswrapper[26053]: I0318 09:04:38.206929 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:04:40.335525 master-0 kubenswrapper[26053]: I0318 09:04:40.330884 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"]
Mar 18 09:04:42.165471 master-0 kubenswrapper[26053]: I0318 09:04:42.162766 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:04:42.165471 master-0 kubenswrapper[26053]: I0318 09:04:42.163805 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.166754 master-0 kubenswrapper[26053]: I0318 09:04:42.166384 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6mb4h"
Mar 18 09:04:42.166754 master-0 kubenswrapper[26053]: I0318 09:04:42.166536 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 09:04:42.176752 master-0 kubenswrapper[26053]: I0318 09:04:42.175458 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:04:42.219838 master-0 kubenswrapper[26053]: I0318 09:04:42.219766 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.220187 master-0 kubenswrapper[26053]: I0318 09:04:42.220092 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.220256 master-0 kubenswrapper[26053]: I0318 09:04:42.220229 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.321147 master-0 kubenswrapper[26053]: I0318 09:04:42.321085 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.321354 master-0 kubenswrapper[26053]: I0318 09:04:42.321209 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.321354 master-0 kubenswrapper[26053]: I0318 09:04:42.321237 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.321461 master-0 kubenswrapper[26053]: I0318 09:04:42.321435 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.321521 master-0 kubenswrapper[26053]: I0318 09:04:42.321482 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.342690 master-0 kubenswrapper[26053]: I0318 09:04:42.342646 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access\") pod \"installer-2-master-0\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:42.486149 master-0 kubenswrapper[26053]: I0318 09:04:42.486072 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 18 09:04:45.142419 master-0 kubenswrapper[26053]: I0318 09:04:45.140954 26053 scope.go:117] "RemoveContainer" containerID="c0902a4169e07c094c9a3b99e9ad46a44edb13e670f8fb3c264aac643fba743d"
Mar 18 09:04:45.217353 master-0 kubenswrapper[26053]: I0318 09:04:45.217295 26053 scope.go:117] "RemoveContainer" containerID="e66d51cf8147f2ef1dd8f8cd73d79140962d6bcce6a8aaa4c5456711dcd4f71a"
Mar 18 09:04:45.594186 master-0 kubenswrapper[26053]: I0318 09:04:45.594141 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc/installer/0.log"
Mar 18 09:04:45.594274 master-0 kubenswrapper[26053]: I0318 09:04:45.594223 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:45.608966 master-0 kubenswrapper[26053]: I0318 09:04:45.608554 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_f663d91b-029e-4abd-9bbe-2d13331b8132/installer/0.log"
Mar 18 09:04:45.608966 master-0 kubenswrapper[26053]: I0318 09:04:45.608643 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 18 09:04:45.772814 master-0 kubenswrapper[26053]: I0318 09:04:45.772759 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir\") pod \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") "
Mar 18 09:04:45.773029 master-0 kubenswrapper[26053]: I0318 09:04:45.772856 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access\") pod \"f663d91b-029e-4abd-9bbe-2d13331b8132\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") "
Mar 18 09:04:45.773029 master-0 kubenswrapper[26053]: I0318 09:04:45.772910 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock\") pod \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") "
Mar 18 09:04:45.773029 master-0 kubenswrapper[26053]: I0318 09:04:45.772981 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access\") pod \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\" (UID: \"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc\") "
Mar 18 09:04:45.773124 master-0 kubenswrapper[26053]: I0318 09:04:45.773049 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock\") pod \"f663d91b-029e-4abd-9bbe-2d13331b8132\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") "
Mar 18 09:04:45.773161 master-0 kubenswrapper[26053]: I0318 09:04:45.773129 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir\") pod \"f663d91b-029e-4abd-9bbe-2d13331b8132\" (UID: \"f663d91b-029e-4abd-9bbe-2d13331b8132\") "
Mar 18 09:04:45.773536 master-0 kubenswrapper[26053]: I0318 09:04:45.773494 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f663d91b-029e-4abd-9bbe-2d13331b8132" (UID: "f663d91b-029e-4abd-9bbe-2d13331b8132"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:45.773598 master-0 kubenswrapper[26053]: I0318 09:04:45.773546 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" (UID: "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:45.774271 master-0 kubenswrapper[26053]: I0318 09:04:45.773767 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock" (OuterVolumeSpecName: "var-lock") pod "f663d91b-029e-4abd-9bbe-2d13331b8132" (UID: "f663d91b-029e-4abd-9bbe-2d13331b8132"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:45.774271 master-0 kubenswrapper[26053]: I0318 09:04:45.773802 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock" (OuterVolumeSpecName: "var-lock") pod "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" (UID: "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:04:45.777283 master-0 kubenswrapper[26053]: I0318 09:04:45.777240 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" (UID: "c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:04:45.777724 master-0 kubenswrapper[26053]: I0318 09:04:45.777687 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f663d91b-029e-4abd-9bbe-2d13331b8132" (UID: "f663d91b-029e-4abd-9bbe-2d13331b8132"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:04:45.800962 master-0 kubenswrapper[26053]: I0318 09:04:45.800433 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 18 09:04:45.809665 master-0 kubenswrapper[26053]: W0318 09:04:45.809121 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc1fdd0ad_78f6_479c_9fb0_6e124b2fa537.slice/crio-f66bbb2f95fbebbbcd99850fb85f7a8c79c7aedd72875d4f2fcdb93864506511 WatchSource:0}: Error finding container f66bbb2f95fbebbbcd99850fb85f7a8c79c7aedd72875d4f2fcdb93864506511: Status 404 returned error can't find the container with id f66bbb2f95fbebbbcd99850fb85f7a8c79c7aedd72875d4f2fcdb93864506511
Mar 18 09:04:45.809665 master-0 kubenswrapper[26053]: I0318 09:04:45.809138 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 18 09:04:45.813398 master-0 kubenswrapper[26053]: I0318 09:04:45.813282 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:04:45.813475 master-0 kubenswrapper[26053]: I0318 09:04:45.813379 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:04:45.816922 master-0 kubenswrapper[26053]: I0318 09:04:45.816189 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"]
Mar 18 09:04:45.874664 master-0 kubenswrapper[26053]: I0318 09:04:45.874621 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f663d91b-029e-4abd-9bbe-2d13331b8132-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.874664 master-0 kubenswrapper[26053]: I0318 09:04:45.874659 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.874664 master-0 kubenswrapper[26053]: I0318 09:04:45.874672 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.875035 master-0 kubenswrapper[26053]: I0318 09:04:45.874684 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.875035 master-0 kubenswrapper[26053]: I0318 09:04:45.874703 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f663d91b-029e-4abd-9bbe-2d13331b8132-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.875035 master-0 kubenswrapper[26053]: I0318 09:04:45.874715 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:04:45.893670 master-0 kubenswrapper[26053]: I0318 09:04:45.893593 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" event={"ID":"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997","Type":"ContainerStarted","Data":"2354db6e45a35f5e861f879d34fabf5e99fd18d81643742d89ed59540eb046a6"}
Mar 18 09:04:45.907472 master-0 kubenswrapper[26053]: I0318 09:04:45.907386 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8","Type":"ContainerStarted","Data":"9ee48354727ea7c64e2885f055c6cd978491aac4b6b6159d9bf71084affdae95"}
Mar 18 09:04:45.910752 master-0 kubenswrapper[26053]: I0318 09:04:45.910717 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537","Type":"ContainerStarted","Data":"f66bbb2f95fbebbbcd99850fb85f7a8c79c7aedd72875d4f2fcdb93864506511"}
Mar 18 09:04:45.917192 master-0 kubenswrapper[26053]: I0318 09:04:45.917143 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-bfrtz" event={"ID":"bbedaed5-a2a1-4853-8b60-0baf3d1b143d","Type":"ContainerStarted","Data":"f6d03f50a867026b20c68522b229084d9f33286f70ff98906b9145bc756900f2"}
Mar 18 09:04:45.918278 master-0 kubenswrapper[26053]: I0318 09:04:45.918259 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-bfrtz"
Mar 18 09:04:45.920426 master-0 kubenswrapper[26053]: I0318 09:04:45.920386 26053 patch_prober.go:28] interesting pod/downloads-66b8ffb895-bfrtz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused" start-of-body=
Mar 18 09:04:45.920506 master-0 kubenswrapper[26053]: I0318 09:04:45.920449 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-bfrtz" podUID="bbedaed5-a2a1-4853-8b60-0baf3d1b143d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused"
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.921270 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_f663d91b-029e-4abd-9bbe-2d13331b8132/installer/0.log"
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.921313 26053 generic.go:334] "Generic (PLEG): container finished" podID="f663d91b-029e-4abd-9bbe-2d13331b8132" containerID="7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb" exitCode=1
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.921394 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.922205 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f663d91b-029e-4abd-9bbe-2d13331b8132","Type":"ContainerDied","Data":"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"}
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.922248 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"f663d91b-029e-4abd-9bbe-2d13331b8132","Type":"ContainerDied","Data":"8191491493a7d30258ce3bc56a805bcd12eb3f892abb559bf60bc1ed9e13d95c"}
Mar 18 09:04:45.922393 master-0 kubenswrapper[26053]: I0318 09:04:45.922269 26053 scope.go:117] "RemoveContainer" containerID="7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"
Mar 18 09:04:45.927933 master-0 kubenswrapper[26053]: I0318 09:04:45.927894 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc/installer/0.log"
Mar 18 09:04:45.928020 master-0 kubenswrapper[26053]: I0318 09:04:45.927963 26053 generic.go:334] "Generic (PLEG): container finished" podID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" containerID="89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81" exitCode=1
Mar 18 09:04:45.928020 master-0 kubenswrapper[26053]: I0318 09:04:45.928002 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc","Type":"ContainerDied","Data":"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"}
Mar 18 09:04:45.928129 master-0 kubenswrapper[26053]: I0318 09:04:45.928023 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc","Type":"ContainerDied","Data":"c16cd65f58b8e23c74dc64601a42b5adc826929b454a13c093edc446b0a72035"}
Mar 18 09:04:45.928129 master-0 kubenswrapper[26053]: I0318 09:04:45.928081 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 18 09:04:45.944057 master-0 kubenswrapper[26053]: I0318 09:04:45.943911 26053 scope.go:117] "RemoveContainer" containerID="7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"
Mar 18 09:04:45.945260 master-0 kubenswrapper[26053]: E0318 09:04:45.945219 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb\": container with ID starting with 7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb not found: ID does not exist" containerID="7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"
Mar 18 09:04:45.945419 master-0 kubenswrapper[26053]: I0318 09:04:45.945260 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb"} err="failed to get container status \"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb\": rpc error: code = NotFound desc = could not find container \"7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb\": container with ID starting with 7eee5d5faeab188d73b908e251cdf47f56d757d149fed44ad4e5f00f030b31bb not found: ID does not exist"
Mar 18 09:04:45.945419 master-0 kubenswrapper[26053]: I0318 09:04:45.945285 26053 scope.go:117] "RemoveContainer" containerID="89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"
Mar 18 09:04:45.966775 master-0 kubenswrapper[26053]: I0318 09:04:45.966709 26053 scope.go:117] "RemoveContainer" containerID="89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"
Mar 18 09:04:45.967195 master-0 kubenswrapper[26053]: E0318 09:04:45.967170 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81\": container with ID starting with 89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81 not found: ID does not exist" containerID="89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"
Mar 18 09:04:45.967246 master-0 kubenswrapper[26053]: I0318 09:04:45.967202 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81"} err="failed to get container status \"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81\": rpc error: code = NotFound desc = could not find container \"89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81\": container with ID starting with 89b16b843281797e38c6d3fcf436602d5adc8d702ac45b1cef7601a1fd9ffa81 not found: ID does not exist"
Mar 18 09:04:46.937118 master-0 kubenswrapper[26053]: I0318 09:04:46.937047 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0"
event={"ID":"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8","Type":"ContainerStarted","Data":"486d275d0446fe617ce1f81d234818d3aa4d815534024550c6d720ee5fc67ef9"} Mar 18 09:04:46.939318 master-0 kubenswrapper[26053]: I0318 09:04:46.939258 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537","Type":"ContainerStarted","Data":"9c90260e52d989fed2496bdafc089fb2189122dfd653ddbb19c9125dd55f3dd3"} Mar 18 09:04:46.941148 master-0 kubenswrapper[26053]: I0318 09:04:46.941103 26053 patch_prober.go:28] interesting pod/downloads-66b8ffb895-bfrtz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused" start-of-body= Mar 18 09:04:46.941227 master-0 kubenswrapper[26053]: I0318 09:04:46.941153 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-bfrtz" podUID="bbedaed5-a2a1-4853-8b60-0baf3d1b143d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused" Mar 18 09:04:47.104028 master-0 kubenswrapper[26053]: I0318 09:04:47.103956 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:04:47.115827 master-0 kubenswrapper[26053]: I0318 09:04:47.113951 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"installer-1-retry-2-master-0\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " 
pod="openshift-kube-apiserver/installer-1-retry-2-master-0" Mar 18 09:04:47.307186 master-0 kubenswrapper[26053]: I0318 09:04:47.307115 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") pod \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\" (UID: \"c46fcf39-9167-4ec2-9d2c-0a622bc69d13\") " Mar 18 09:04:47.311667 master-0 kubenswrapper[26053]: I0318 09:04:47.311605 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c46fcf39-9167-4ec2-9d2c-0a622bc69d13" (UID: "c46fcf39-9167-4ec2-9d2c-0a622bc69d13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:04:47.408962 master-0 kubenswrapper[26053]: I0318 09:04:47.408904 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c46fcf39-9167-4ec2-9d2c-0a622bc69d13-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:04:47.490062 master-0 kubenswrapper[26053]: I0318 09:04:47.489941 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-bfrtz" podStartSLOduration=4.965917866 podStartE2EDuration="46.489910733s" podCreationTimestamp="2026-03-18 09:04:01 +0000 UTC" firstStartedPulling="2026-03-18 09:04:03.84466747 +0000 UTC m=+31.338018851" lastFinishedPulling="2026-03-18 09:04:45.368660337 +0000 UTC m=+72.862011718" observedRunningTime="2026-03-18 09:04:46.757995252 +0000 UTC m=+74.251346643" watchObservedRunningTime="2026-03-18 09:04:47.489910733 +0000 UTC m=+74.983262154" Mar 18 09:04:47.494974 master-0 kubenswrapper[26053]: I0318 09:04:47.494882 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 09:04:47.854354 master-0 kubenswrapper[26053]: I0318 09:04:47.854281 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:04:47.854623 master-0 kubenswrapper[26053]: I0318 09:04:47.854394 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:04:47.947399 master-0 kubenswrapper[26053]: I0318 09:04:47.947307 26053 patch_prober.go:28] interesting pod/downloads-66b8ffb895-bfrtz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused" start-of-body= Mar 18 09:04:47.948038 master-0 kubenswrapper[26053]: I0318 09:04:47.947437 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-bfrtz" podUID="bbedaed5-a2a1-4853-8b60-0baf3d1b143d" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.83:8080/\": dial tcp 10.128.0.83:8080: connect: connection refused" Mar 18 09:04:48.026964 master-0 kubenswrapper[26053]: I0318 09:04:48.026883 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 18 09:04:48.739634 master-0 kubenswrapper[26053]: I0318 09:04:48.739587 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" path="/var/lib/kubelet/pods/c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc/volumes" Mar 18 09:04:49.605245 master-0 
kubenswrapper[26053]: I0318 09:04:49.605178 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:04:49.899103 master-0 kubenswrapper[26053]: I0318 09:04:49.898428 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:50.516615 master-0 kubenswrapper[26053]: I0318 09:04:50.515895 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 18 09:04:50.742939 master-0 kubenswrapper[26053]: I0318 09:04:50.741779 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f663d91b-029e-4abd-9bbe-2d13331b8132" path="/var/lib/kubelet/pods/f663d91b-029e-4abd-9bbe-2d13331b8132/volumes" Mar 18 09:04:51.374444 master-0 kubenswrapper[26053]: I0318 09:04:51.374337 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=16.374308642 podStartE2EDuration="16.374308642s" podCreationTimestamp="2026-03-18 09:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:51.371497069 +0000 UTC m=+78.864848460" watchObservedRunningTime="2026-03-18 09:04:51.374308642 +0000 UTC m=+78.867660063" Mar 18 09:04:53.052340 master-0 kubenswrapper[26053]: I0318 09:04:53.052253 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=11.05223234 podStartE2EDuration="11.05223234s" podCreationTimestamp="2026-03-18 09:04:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:04:53.051749108 +0000 UTC m=+80.545100489" watchObservedRunningTime="2026-03-18 09:04:53.05223234 +0000 
UTC m=+80.545583721" Mar 18 09:04:53.409591 master-0 kubenswrapper[26053]: I0318 09:04:53.409401 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-bfrtz" Mar 18 09:04:54.005118 master-0 kubenswrapper[26053]: I0318 09:04:54.005009 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" event={"ID":"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997","Type":"ContainerStarted","Data":"5ec4bce84348d89e4858afd2a0515b719238cabe72000d120ceed47151955b37"} Mar 18 09:04:54.005393 master-0 kubenswrapper[26053]: I0318 09:04:54.005368 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:54.008471 master-0 kubenswrapper[26053]: I0318 09:04:54.008434 26053 patch_prober.go:28] interesting pod/oauth-openshift-759d994cb6-pm8qx container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.88:6443/healthz\": dial tcp 10.128.0.88:6443: connect: connection refused" start-of-body= Mar 18 09:04:54.008538 master-0 kubenswrapper[26053]: I0318 09:04:54.008475 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.88:6443/healthz\": dial tcp 10.128.0.88:6443: connect: connection refused" Mar 18 09:04:55.813816 master-0 kubenswrapper[26053]: I0318 09:04:55.813720 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:04:55.814316 master-0 kubenswrapper[26053]: I0318 09:04:55.813840 26053 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 18 09:04:57.864288 master-0 kubenswrapper[26053]: I0318 09:04:57.864165 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:04:57.864288 master-0 kubenswrapper[26053]: I0318 09:04:57.864284 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:04:58.215810 master-0 kubenswrapper[26053]: I0318 09:04:58.215350 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:04:58.547425 master-0 kubenswrapper[26053]: I0318 09:04:58.547246 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" podStartSLOduration=16.755192188 podStartE2EDuration="24.547223992s" podCreationTimestamp="2026-03-18 09:04:34 +0000 UTC" firstStartedPulling="2026-03-18 09:04:45.832234254 +0000 UTC m=+73.325585635" lastFinishedPulling="2026-03-18 09:04:53.624266058 +0000 UTC m=+81.117617439" observedRunningTime="2026-03-18 09:04:54.027335898 +0000 UTC m=+81.520687279" watchObservedRunningTime="2026-03-18 09:04:58.547223992 +0000 UTC m=+86.040575383" Mar 18 09:05:00.438995 master-0 kubenswrapper[26053]: I0318 09:05:00.438922 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 09:05:00.439909 master-0 kubenswrapper[26053]: I0318 09:05:00.439246 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" containerName="installer" containerID="cri-o://9c90260e52d989fed2496bdafc089fb2189122dfd653ddbb19c9125dd55f3dd3" gracePeriod=30 Mar 18 09:05:05.159809 master-0 kubenswrapper[26053]: I0318 09:05:05.159723 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: E0318 09:05:05.160121 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f663d91b-029e-4abd-9bbe-2d13331b8132" containerName="installer" Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: I0318 09:05:05.160142 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="f663d91b-029e-4abd-9bbe-2d13331b8132" containerName="installer" Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: E0318 09:05:05.160192 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" containerName="installer" Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: I0318 09:05:05.160208 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" containerName="installer" Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: I0318 09:05:05.160403 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d3ed3c-5124-4494-b3e2-be5dfaafc4bc" containerName="installer" Mar 18 09:05:05.160522 master-0 kubenswrapper[26053]: I0318 09:05:05.160440 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="f663d91b-029e-4abd-9bbe-2d13331b8132" containerName="installer" Mar 18 09:05:05.161258 master-0 kubenswrapper[26053]: I0318 09:05:05.161225 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.182614 master-0 kubenswrapper[26053]: I0318 09:05:05.179762 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 09:05:05.237235 master-0 kubenswrapper[26053]: I0318 09:05:05.237109 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.237235 master-0 kubenswrapper[26053]: I0318 09:05:05.237247 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.237685 master-0 kubenswrapper[26053]: I0318 09:05:05.237327 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.339496 master-0 kubenswrapper[26053]: I0318 09:05:05.339420 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.339784 master-0 kubenswrapper[26053]: I0318 09:05:05.339546 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.339784 master-0 kubenswrapper[26053]: I0318 09:05:05.339711 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.339979 master-0 kubenswrapper[26053]: I0318 09:05:05.339897 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.340020 master-0 kubenswrapper[26053]: I0318 09:05:05.340004 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.373678 master-0 kubenswrapper[26053]: I0318 09:05:05.373613 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.500315 master-0 kubenswrapper[26053]: I0318 09:05:05.500215 26053 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 18 09:05:05.813990 master-0 kubenswrapper[26053]: I0318 09:05:05.813751 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:05:05.813990 master-0 kubenswrapper[26053]: I0318 09:05:05.813927 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 18 09:05:05.997453 master-0 kubenswrapper[26053]: I0318 09:05:05.997356 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 18 09:05:06.007923 master-0 kubenswrapper[26053]: W0318 09:05:06.007843 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c2cac96_c192_47dd_9c3a_6dc58a165084.slice/crio-6a1272c41fa1218103cafdda6a224eecf47d8f9487ecbbc736f479d5ef62ea1d WatchSource:0}: Error finding container 6a1272c41fa1218103cafdda6a224eecf47d8f9487ecbbc736f479d5ef62ea1d: Status 404 returned error can't find the container with id 6a1272c41fa1218103cafdda6a224eecf47d8f9487ecbbc736f479d5ef62ea1d Mar 18 09:05:06.136111 master-0 kubenswrapper[26053]: I0318 09:05:06.135931 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4c2cac96-c192-47dd-9c3a-6dc58a165084","Type":"ContainerStarted","Data":"6a1272c41fa1218103cafdda6a224eecf47d8f9487ecbbc736f479d5ef62ea1d"} Mar 18 09:05:07.148791 master-0 kubenswrapper[26053]: I0318 09:05:07.148622 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4c2cac96-c192-47dd-9c3a-6dc58a165084","Type":"ContainerStarted","Data":"fcc86bc36370a7bbe86a9bc2aedad52fd6ea53360f20071df19fbd25a1f58504"} Mar 18 09:05:07.176337 master-0 kubenswrapper[26053]: I0318 09:05:07.176238 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.176212771 podStartE2EDuration="2.176212771s" podCreationTimestamp="2026-03-18 09:05:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:07.174501518 +0000 UTC m=+94.667852919" watchObservedRunningTime="2026-03-18 09:05:07.176212771 +0000 UTC m=+94.669564172" Mar 18 09:05:07.855198 master-0 kubenswrapper[26053]: I0318 09:05:07.855112 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:05:07.855198 master-0 kubenswrapper[26053]: I0318 09:05:07.855187 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:05:09.206596 master-0 kubenswrapper[26053]: I0318 09:05:09.196729 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 09:05:09.206596 master-0 kubenswrapper[26053]: I0318 09:05:09.197122 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" 
containerName="controller-manager" containerID="cri-o://e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26" gracePeriod=30 Mar 18 09:05:09.236142 master-0 kubenswrapper[26053]: I0318 09:05:09.234991 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 09:05:09.236142 master-0 kubenswrapper[26053]: I0318 09:05:09.235472 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" podUID="7b7ac7ef-060f-45d2-8988-006d45402e00" containerName="route-controller-manager" containerID="cri-o://cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603" gracePeriod=30 Mar 18 09:05:09.910276 master-0 kubenswrapper[26053]: I0318 09:05:09.910173 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:05:10.016804 master-0 kubenswrapper[26053]: I0318 09:05:10.016759 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:05:10.020964 master-0 kubenswrapper[26053]: I0318 09:05:10.020896 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") pod \"7b7ac7ef-060f-45d2-8988-006d45402e00\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " Mar 18 09:05:10.021082 master-0 kubenswrapper[26053]: I0318 09:05:10.021015 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") pod \"7b7ac7ef-060f-45d2-8988-006d45402e00\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " Mar 18 09:05:10.021487 master-0 kubenswrapper[26053]: I0318 09:05:10.021439 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config" (OuterVolumeSpecName: "config") pod "7b7ac7ef-060f-45d2-8988-006d45402e00" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:10.021589 master-0 kubenswrapper[26053]: I0318 09:05:10.021539 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") pod \"7b7ac7ef-060f-45d2-8988-006d45402e00\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " Mar 18 09:05:10.021589 master-0 kubenswrapper[26053]: I0318 09:05:10.021588 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") pod \"7b7ac7ef-060f-45d2-8988-006d45402e00\" (UID: \"7b7ac7ef-060f-45d2-8988-006d45402e00\") " Mar 18 09:05:10.022062 master-0 kubenswrapper[26053]: I0318 09:05:10.022016 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca" (OuterVolumeSpecName: "client-ca") pod "7b7ac7ef-060f-45d2-8988-006d45402e00" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:10.022182 master-0 kubenswrapper[26053]: I0318 09:05:10.022153 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.022182 master-0 kubenswrapper[26053]: I0318 09:05:10.022175 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7b7ac7ef-060f-45d2-8988-006d45402e00-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.037590 master-0 kubenswrapper[26053]: I0318 09:05:10.033045 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s" (OuterVolumeSpecName: "kube-api-access-qkx4s") pod "7b7ac7ef-060f-45d2-8988-006d45402e00" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00"). InnerVolumeSpecName "kube-api-access-qkx4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:05:10.055587 master-0 kubenswrapper[26053]: I0318 09:05:10.052240 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7b7ac7ef-060f-45d2-8988-006d45402e00" (UID: "7b7ac7ef-060f-45d2-8988-006d45402e00"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.122982 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") pod \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123039 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") pod \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123192 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") pod \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123257 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") pod \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123300 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") pod \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\" (UID: \"6e869b45-8ca6-485f-8b6f-b2fad3b02efe\") " Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123660 26053 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-qkx4s\" (UniqueName: \"kubernetes.io/projected/7b7ac7ef-060f-45d2-8988-006d45402e00-kube-api-access-qkx4s\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123688 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7b7ac7ef-060f-45d2-8988-006d45402e00-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.123974 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca" (OuterVolumeSpecName: "client-ca") pod "6e869b45-8ca6-485f-8b6f-b2fad3b02efe" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:10.124041 master-0 kubenswrapper[26053]: I0318 09:05:10.124030 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6e869b45-8ca6-485f-8b6f-b2fad3b02efe" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:10.124549 master-0 kubenswrapper[26053]: I0318 09:05:10.124179 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config" (OuterVolumeSpecName: "config") pod "6e869b45-8ca6-485f-8b6f-b2fad3b02efe" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:10.127333 master-0 kubenswrapper[26053]: I0318 09:05:10.125481 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6e869b45-8ca6-485f-8b6f-b2fad3b02efe" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:05:10.127333 master-0 kubenswrapper[26053]: I0318 09:05:10.125658 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l" (OuterVolumeSpecName: "kube-api-access-xjv4l") pod "6e869b45-8ca6-485f-8b6f-b2fad3b02efe" (UID: "6e869b45-8ca6-485f-8b6f-b2fad3b02efe"). InnerVolumeSpecName "kube-api-access-xjv4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:05:10.173238 master-0 kubenswrapper[26053]: I0318 09:05:10.173109 26053 generic.go:334] "Generic (PLEG): container finished" podID="7b7ac7ef-060f-45d2-8988-006d45402e00" containerID="cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603" exitCode=0 Mar 18 09:05:10.173238 master-0 kubenswrapper[26053]: I0318 09:05:10.173187 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" Mar 18 09:05:10.173433 master-0 kubenswrapper[26053]: I0318 09:05:10.173192 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" event={"ID":"7b7ac7ef-060f-45d2-8988-006d45402e00","Type":"ContainerDied","Data":"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603"} Mar 18 09:05:10.173433 master-0 kubenswrapper[26053]: I0318 09:05:10.173310 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg" event={"ID":"7b7ac7ef-060f-45d2-8988-006d45402e00","Type":"ContainerDied","Data":"78e813f78215ce3e16f2984fa206660096f7ce143773316d81ca9ea51d037b30"} Mar 18 09:05:10.173433 master-0 kubenswrapper[26053]: I0318 09:05:10.173332 26053 scope.go:117] "RemoveContainer" containerID="cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603" Mar 18 09:05:10.177263 master-0 kubenswrapper[26053]: I0318 09:05:10.177231 26053 generic.go:334] "Generic (PLEG): container finished" podID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerID="e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26" exitCode=0 Mar 18 09:05:10.177316 master-0 kubenswrapper[26053]: I0318 09:05:10.177270 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerDied","Data":"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26"} Mar 18 09:05:10.177316 master-0 kubenswrapper[26053]: I0318 09:05:10.177294 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" 
event={"ID":"6e869b45-8ca6-485f-8b6f-b2fad3b02efe","Type":"ContainerDied","Data":"34190ff24c5d64d3f04ee73c9371b2fe699e4dc756931f93643f7e454d205294"} Mar 18 09:05:10.177380 master-0 kubenswrapper[26053]: I0318 09:05:10.177343 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7d954fcfb-gpddv" Mar 18 09:05:10.191821 master-0 kubenswrapper[26053]: I0318 09:05:10.191785 26053 scope.go:117] "RemoveContainer" containerID="cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603" Mar 18 09:05:10.192258 master-0 kubenswrapper[26053]: E0318 09:05:10.192225 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603\": container with ID starting with cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603 not found: ID does not exist" containerID="cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603" Mar 18 09:05:10.192309 master-0 kubenswrapper[26053]: I0318 09:05:10.192261 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603"} err="failed to get container status \"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603\": rpc error: code = NotFound desc = could not find container \"cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603\": container with ID starting with cad52e5c76ccd56c40f70728b53ed3629a65d0787f739d643129ab2545080603 not found: ID does not exist" Mar 18 09:05:10.192309 master-0 kubenswrapper[26053]: I0318 09:05:10.192295 26053 scope.go:117] "RemoveContainer" containerID="e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26" Mar 18 09:05:10.208704 master-0 kubenswrapper[26053]: I0318 09:05:10.208110 26053 scope.go:117] "RemoveContainer" 
containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" Mar 18 09:05:10.210809 master-0 kubenswrapper[26053]: I0318 09:05:10.210774 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 09:05:10.214366 master-0 kubenswrapper[26053]: I0318 09:05:10.214340 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7d954fcfb-gpddv"] Mar 18 09:05:10.222827 master-0 kubenswrapper[26053]: I0318 09:05:10.222791 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 09:05:10.224158 master-0 kubenswrapper[26053]: I0318 09:05:10.223712 26053 scope.go:117] "RemoveContainer" containerID="e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26" Mar 18 09:05:10.225775 master-0 kubenswrapper[26053]: I0318 09:05:10.225748 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.225775 master-0 kubenswrapper[26053]: I0318 09:05:10.225772 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjv4l\" (UniqueName: \"kubernetes.io/projected/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-kube-api-access-xjv4l\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.225873 master-0 kubenswrapper[26053]: I0318 09:05:10.225783 26053 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.225873 master-0 kubenswrapper[26053]: I0318 09:05:10.225794 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-client-ca\") on 
node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.225873 master-0 kubenswrapper[26053]: I0318 09:05:10.225802 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e869b45-8ca6-485f-8b6f-b2fad3b02efe-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:10.225988 master-0 kubenswrapper[26053]: E0318 09:05:10.225952 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26\": container with ID starting with e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26 not found: ID does not exist" containerID="e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26" Mar 18 09:05:10.226033 master-0 kubenswrapper[26053]: I0318 09:05:10.225999 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26"} err="failed to get container status \"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26\": rpc error: code = NotFound desc = could not find container \"e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26\": container with ID starting with e32072042363277b3f0d68f8ae8924dc1c01bef38c146caf77cf061b8b73cd26 not found: ID does not exist" Mar 18 09:05:10.226033 master-0 kubenswrapper[26053]: I0318 09:05:10.226028 26053 scope.go:117] "RemoveContainer" containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" Mar 18 09:05:10.226350 master-0 kubenswrapper[26053]: E0318 09:05:10.226323 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74\": container with ID starting with c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74 not found: ID does not exist" 
containerID="c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74" Mar 18 09:05:10.226389 master-0 kubenswrapper[26053]: I0318 09:05:10.226351 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74"} err="failed to get container status \"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74\": rpc error: code = NotFound desc = could not find container \"c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74\": container with ID starting with c6f50cc1d4d03038974b1dda9ac09a73d320c83ee3b7473d607c5ae6d14d9d74 not found: ID does not exist" Mar 18 09:05:10.226948 master-0 kubenswrapper[26053]: I0318 09:05:10.226924 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbcb47f86-ptccg"] Mar 18 09:05:10.406141 master-0 kubenswrapper[26053]: I0318 09:05:10.406053 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"] Mar 18 09:05:10.406470 master-0 kubenswrapper[26053]: E0318 09:05:10.406439 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.406532 master-0 kubenswrapper[26053]: I0318 09:05:10.406460 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.406532 master-0 kubenswrapper[26053]: E0318 09:05:10.406502 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.406532 master-0 kubenswrapper[26053]: I0318 09:05:10.406513 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.406694 master-0 
kubenswrapper[26053]: E0318 09:05:10.406586 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7ac7ef-060f-45d2-8988-006d45402e00" containerName="route-controller-manager" Mar 18 09:05:10.406694 master-0 kubenswrapper[26053]: I0318 09:05:10.406598 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7ac7ef-060f-45d2-8988-006d45402e00" containerName="route-controller-manager" Mar 18 09:05:10.406806 master-0 kubenswrapper[26053]: I0318 09:05:10.406787 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.406883 master-0 kubenswrapper[26053]: I0318 09:05:10.406867 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7ac7ef-060f-45d2-8988-006d45402e00" containerName="route-controller-manager" Mar 18 09:05:10.407422 master-0 kubenswrapper[26053]: I0318 09:05:10.407389 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.409336 master-0 kubenswrapper[26053]: I0318 09:05:10.409299 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:05:10.410237 master-0 kubenswrapper[26053]: I0318 09:05:10.410205 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 18 09:05:10.410517 master-0 kubenswrapper[26053]: I0318 09:05:10.410486 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6" Mar 18 09:05:10.410771 master-0 kubenswrapper[26053]: I0318 09:05:10.410733 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:05:10.410920 master-0 kubenswrapper[26053]: I0318 09:05:10.410888 26053 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:05:10.413666 master-0 kubenswrapper[26053]: I0318 09:05:10.413636 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"] Mar 18 09:05:10.414025 master-0 kubenswrapper[26053]: I0318 09:05:10.414004 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" containerName="controller-manager" Mar 18 09:05:10.414429 master-0 kubenswrapper[26053]: I0318 09:05:10.414403 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.419237 master-0 kubenswrapper[26053]: I0318 09:05:10.419187 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:05:10.421181 master-0 kubenswrapper[26053]: I0318 09:05:10.421128 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"] Mar 18 09:05:10.427624 master-0 kubenswrapper[26053]: I0318 09:05:10.427587 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 18 09:05:10.427689 master-0 kubenswrapper[26053]: I0318 09:05:10.427636 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4" Mar 18 09:05:10.428465 master-0 kubenswrapper[26053]: I0318 09:05:10.428429 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:05:10.429654 master-0 kubenswrapper[26053]: I0318 09:05:10.428664 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:05:10.429654 master-0 
kubenswrapper[26053]: I0318 09:05:10.428682 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:05:10.429654 master-0 kubenswrapper[26053]: I0318 09:05:10.428822 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:05:10.429654 master-0 kubenswrapper[26053]: I0318 09:05:10.428956 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 18 09:05:10.433415 master-0 kubenswrapper[26053]: I0318 09:05:10.433390 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"] Mar 18 09:05:10.529902 master-0 kubenswrapper[26053]: I0318 09:05:10.529847 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.530320 master-0 kubenswrapper[26053]: I0318 09:05:10.530286 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.530510 master-0 kubenswrapper[26053]: I0318 09:05:10.530482 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh5gx\" (UniqueName: \"kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx\") pod 
\"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.530835 master-0 kubenswrapper[26053]: I0318 09:05:10.530805 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.531040 master-0 kubenswrapper[26053]: I0318 09:05:10.531012 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.531247 master-0 kubenswrapper[26053]: I0318 09:05:10.531218 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.531469 master-0 kubenswrapper[26053]: I0318 09:05:10.531439 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " 
pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.531683 master-0 kubenswrapper[26053]: I0318 09:05:10.531656 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.531895 master-0 kubenswrapper[26053]: I0318 09:05:10.531867 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.632991 master-0 kubenswrapper[26053]: I0318 09:05:10.632903 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.633191 master-0 kubenswrapper[26053]: I0318 09:05:10.633025 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.633191 master-0 kubenswrapper[26053]: I0318 09:05:10.633057 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zh5gx\" (UniqueName: \"kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.633191 master-0 kubenswrapper[26053]: I0318 09:05:10.633083 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.633191 master-0 kubenswrapper[26053]: I0318 09:05:10.633109 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.633191 master-0 kubenswrapper[26053]: I0318 09:05:10.633152 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.633334 master-0 kubenswrapper[26053]: I0318 09:05:10.633203 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " 
pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.633334 master-0 kubenswrapper[26053]: I0318 09:05:10.633231 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.633334 master-0 kubenswrapper[26053]: I0318 09:05:10.633264 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.634490 master-0 kubenswrapper[26053]: I0318 09:05:10.634463 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.634747 master-0 kubenswrapper[26053]: I0318 09:05:10.634712 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.635662 master-0 kubenswrapper[26053]: I0318 09:05:10.634943 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.635662 master-0 kubenswrapper[26053]: I0318 09:05:10.635344 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.635662 master-0 kubenswrapper[26053]: I0318 09:05:10.635585 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.638228 master-0 kubenswrapper[26053]: I0318 09:05:10.638204 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.638447 master-0 kubenswrapper[26053]: I0318 09:05:10.638418 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.656017 master-0 kubenswrapper[26053]: I0318 
09:05:10.655968 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh5gx\" (UniqueName: \"kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx\") pod \"controller-manager-b6fbdfb5-hxtkm\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") " pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.657140 master-0 kubenswrapper[26053]: I0318 09:05:10.657101 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv\") pod \"route-controller-manager-59885f85db-7xg2s\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") " pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:10.735619 master-0 kubenswrapper[26053]: I0318 09:05:10.735562 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:10.743359 master-0 kubenswrapper[26053]: I0318 09:05:10.743326 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e869b45-8ca6-485f-8b6f-b2fad3b02efe" path="/var/lib/kubelet/pods/6e869b45-8ca6-485f-8b6f-b2fad3b02efe/volumes" Mar 18 09:05:10.744007 master-0 kubenswrapper[26053]: I0318 09:05:10.743982 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7ac7ef-060f-45d2-8988-006d45402e00" path="/var/lib/kubelet/pods/7b7ac7ef-060f-45d2-8988-006d45402e00/volumes" Mar 18 09:05:10.749047 master-0 kubenswrapper[26053]: I0318 09:05:10.749003 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:11.175678 master-0 kubenswrapper[26053]: I0318 09:05:11.175621 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"] Mar 18 09:05:11.181508 master-0 kubenswrapper[26053]: W0318 09:05:11.181459 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94b229a5_7840_46fe_a221_85093a4f4a72.slice/crio-ad0f96131c788e721ab29fe5861a0fee1e64255ac2bda2b065b890a5b75ebf53 WatchSource:0}: Error finding container ad0f96131c788e721ab29fe5861a0fee1e64255ac2bda2b065b890a5b75ebf53: Status 404 returned error can't find the container with id ad0f96131c788e721ab29fe5861a0fee1e64255ac2bda2b065b890a5b75ebf53 Mar 18 09:05:11.241990 master-0 kubenswrapper[26053]: I0318 09:05:11.241916 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"] Mar 18 09:05:11.245851 master-0 kubenswrapper[26053]: W0318 09:05:11.245802 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d382cea_1da2_48b9_b151_36438d83ee30.slice/crio-5c3362c5f580c6e27737d82422403afb52a8181ef4365cb5c8c417d64f1c3da8 WatchSource:0}: Error finding container 5c3362c5f580c6e27737d82422403afb52a8181ef4365cb5c8c417d64f1c3da8: Status 404 returned error can't find the container with id 5c3362c5f580c6e27737d82422403afb52a8181ef4365cb5c8c417d64f1c3da8 Mar 18 09:05:12.204555 master-0 kubenswrapper[26053]: I0318 09:05:12.204422 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" event={"ID":"7d382cea-1da2-48b9-b151-36438d83ee30","Type":"ContainerStarted","Data":"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"} 
Mar 18 09:05:12.204828 master-0 kubenswrapper[26053]: I0318 09:05:12.204556 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" event={"ID":"7d382cea-1da2-48b9-b151-36438d83ee30","Type":"ContainerStarted","Data":"5c3362c5f580c6e27737d82422403afb52a8181ef4365cb5c8c417d64f1c3da8"} Mar 18 09:05:12.204828 master-0 kubenswrapper[26053]: I0318 09:05:12.204669 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:12.207238 master-0 kubenswrapper[26053]: I0318 09:05:12.207113 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" event={"ID":"94b229a5-7840-46fe-a221-85093a4f4a72","Type":"ContainerStarted","Data":"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"} Mar 18 09:05:12.207510 master-0 kubenswrapper[26053]: I0318 09:05:12.207472 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" event={"ID":"94b229a5-7840-46fe-a221-85093a4f4a72","Type":"ContainerStarted","Data":"ad0f96131c788e721ab29fe5861a0fee1e64255ac2bda2b065b890a5b75ebf53"} Mar 18 09:05:12.207616 master-0 kubenswrapper[26053]: I0318 09:05:12.207524 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:12.212472 master-0 kubenswrapper[26053]: I0318 09:05:12.212398 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" Mar 18 09:05:12.224172 master-0 kubenswrapper[26053]: I0318 09:05:12.224100 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" Mar 18 09:05:12.242379 
master-0 kubenswrapper[26053]: I0318 09:05:12.242293 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" podStartSLOduration=3.242272359 podStartE2EDuration="3.242272359s" podCreationTimestamp="2026-03-18 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:12.240303329 +0000 UTC m=+99.733654730" watchObservedRunningTime="2026-03-18 09:05:12.242272359 +0000 UTC m=+99.735623750" Mar 18 09:05:12.265066 master-0 kubenswrapper[26053]: I0318 09:05:12.264944 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" podStartSLOduration=3.264909276 podStartE2EDuration="3.264909276s" podCreationTimestamp="2026-03-18 09:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:12.261644853 +0000 UTC m=+99.754996244" watchObservedRunningTime="2026-03-18 09:05:12.264909276 +0000 UTC m=+99.758260727" Mar 18 09:05:15.812677 master-0 kubenswrapper[26053]: I0318 09:05:15.812633 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:05:15.813384 master-0 kubenswrapper[26053]: I0318 09:05:15.813341 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 18 09:05:17.854282 master-0 kubenswrapper[26053]: I0318 09:05:17.854201 26053 
patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body= Mar 18 09:05:17.854925 master-0 kubenswrapper[26053]: I0318 09:05:17.854313 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" Mar 18 09:05:18.254397 master-0 kubenswrapper[26053]: I0318 09:05:18.254352 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c1fdd0ad-78f6-479c-9fb0-6e124b2fa537/installer/0.log" Mar 18 09:05:18.254397 master-0 kubenswrapper[26053]: I0318 09:05:18.254398 26053 generic.go:334] "Generic (PLEG): container finished" podID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" containerID="9c90260e52d989fed2496bdafc089fb2189122dfd653ddbb19c9125dd55f3dd3" exitCode=1 Mar 18 09:05:18.255083 master-0 kubenswrapper[26053]: I0318 09:05:18.254628 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537","Type":"ContainerDied","Data":"9c90260e52d989fed2496bdafc089fb2189122dfd653ddbb19c9125dd55f3dd3"} Mar 18 09:05:18.392868 master-0 kubenswrapper[26053]: I0318 09:05:18.392783 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c1fdd0ad-78f6-479c-9fb0-6e124b2fa537/installer/0.log" Mar 18 09:05:18.392868 master-0 kubenswrapper[26053]: I0318 09:05:18.392847 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 09:05:18.464250 master-0 kubenswrapper[26053]: I0318 09:05:18.464173 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access\") pod \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " Mar 18 09:05:18.464535 master-0 kubenswrapper[26053]: I0318 09:05:18.464323 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir\") pod \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " Mar 18 09:05:18.464535 master-0 kubenswrapper[26053]: I0318 09:05:18.464404 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock\") pod \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\" (UID: \"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537\") " Mar 18 09:05:18.464869 master-0 kubenswrapper[26053]: I0318 09:05:18.464824 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock" (OuterVolumeSpecName: "var-lock") pod "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" (UID: "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:18.465090 master-0 kubenswrapper[26053]: I0318 09:05:18.465032 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" (UID: "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:18.468982 master-0 kubenswrapper[26053]: I0318 09:05:18.468890 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" (UID: "c1fdd0ad-78f6-479c-9fb0-6e124b2fa537"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:05:18.566427 master-0 kubenswrapper[26053]: I0318 09:05:18.566374 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:18.566732 master-0 kubenswrapper[26053]: I0318 09:05:18.566716 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:18.566832 master-0 kubenswrapper[26053]: I0318 09:05:18.566819 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:05:19.035823 master-0 kubenswrapper[26053]: I0318 09:05:19.035534 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerName="oauth-openshift" containerID="cri-o://5ec4bce84348d89e4858afd2a0515b719238cabe72000d120ceed47151955b37" gracePeriod=15 Mar 18 09:05:19.265045 master-0 kubenswrapper[26053]: I0318 09:05:19.264993 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_c1fdd0ad-78f6-479c-9fb0-6e124b2fa537/installer/0.log" 
Mar 18 09:05:19.265275 master-0 kubenswrapper[26053]: I0318 09:05:19.265071 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"c1fdd0ad-78f6-479c-9fb0-6e124b2fa537","Type":"ContainerDied","Data":"f66bbb2f95fbebbbcd99850fb85f7a8c79c7aedd72875d4f2fcdb93864506511"} Mar 18 09:05:19.265275 master-0 kubenswrapper[26053]: I0318 09:05:19.265110 26053 scope.go:117] "RemoveContainer" containerID="9c90260e52d989fed2496bdafc089fb2189122dfd653ddbb19c9125dd55f3dd3" Mar 18 09:05:19.265275 master-0 kubenswrapper[26053]: I0318 09:05:19.265204 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 18 09:05:19.270503 master-0 kubenswrapper[26053]: I0318 09:05:19.269684 26053 generic.go:334] "Generic (PLEG): container finished" podID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerID="5ec4bce84348d89e4858afd2a0515b719238cabe72000d120ceed47151955b37" exitCode=0 Mar 18 09:05:19.270503 master-0 kubenswrapper[26053]: I0318 09:05:19.269735 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" event={"ID":"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997","Type":"ContainerDied","Data":"5ec4bce84348d89e4858afd2a0515b719238cabe72000d120ceed47151955b37"} Mar 18 09:05:19.296398 master-0 kubenswrapper[26053]: I0318 09:05:19.296341 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 09:05:19.299701 master-0 kubenswrapper[26053]: I0318 09:05:19.299646 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 18 09:05:19.423586 master-0 kubenswrapper[26053]: I0318 09:05:19.423526 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 18 09:05:19.424072 master-0 kubenswrapper[26053]: I0318 09:05:19.424041 26053 
kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://cd8f1b2378c428693218d79b09a56c9b55b51bb98be0e6bcf8f6074d75fc8fec" gracePeriod=30 Mar 18 09:05:19.424308 master-0 kubenswrapper[26053]: I0318 09:05:19.424249 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://4fc555cd68d5d190723bdb906f024eca28a915e20d6010038a593dff24a564cd" gracePeriod=30 Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.425822 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: E0318 09:05:19.426253 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426270 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: E0318 09:05:19.426286 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" containerName="installer" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426294 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" containerName="installer" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: E0318 09:05:19.426318 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 
09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426326 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426507 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" containerName="installer" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426524 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426548 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426590 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: E0318 09:05:19.426745 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426755 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: E0318 09:05:19.426771 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 09:05:19.427200 master-0 kubenswrapper[26053]: I0318 09:05:19.426779 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" Mar 18 09:05:19.427200 master-0 
kubenswrapper[26053]: I0318 09:05:19.426914 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" Mar 18 09:05:19.428717 master-0 kubenswrapper[26053]: I0318 09:05:19.428180 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.477971 master-0 kubenswrapper[26053]: I0318 09:05:19.477916 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.478158 master-0 kubenswrapper[26053]: I0318 09:05:19.478091 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.580063 master-0 kubenswrapper[26053]: I0318 09:05:19.579993 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.580716 master-0 kubenswrapper[26053]: I0318 09:05:19.580077 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.580716 master-0 kubenswrapper[26053]: I0318 09:05:19.580195 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.580716 master-0 kubenswrapper[26053]: I0318 09:05:19.580287 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.672936 master-0 kubenswrapper[26053]: I0318 09:05:19.672534 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" Mar 18 09:05:19.681625 master-0 kubenswrapper[26053]: I0318 09:05:19.680218 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 18 09:05:19.723189 master-0 kubenswrapper[26053]: I0318 09:05:19.721721 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:05:19.734635 master-0 kubenswrapper[26053]: I0318 09:05:19.734551 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:05:19.763209 master-0 kubenswrapper[26053]: W0318 09:05:19.763155 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2902db65fe16fd26bf5e57c38292ff3f.slice/crio-2634d9587dfb7ddcd22d3ef9df9d0f7c56487d281b92049d90a36186a1a7a6ce WatchSource:0}: Error finding container 2634d9587dfb7ddcd22d3ef9df9d0f7c56487d281b92049d90a36186a1a7a6ce: Status 404 returned error can't find the container with id 2634d9587dfb7ddcd22d3ef9df9d0f7c56487d281b92049d90a36186a1a7a6ce Mar 18 09:05:19.791431 master-0 kubenswrapper[26053]: I0318 09:05:19.791375 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns9j2\" (UniqueName: \"kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.791948 master-0 kubenswrapper[26053]: I0318 09:05:19.791918 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.792289 master-0 kubenswrapper[26053]: I0318 09:05:19.792263 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: 
\"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.792452 master-0 kubenswrapper[26053]: I0318 09:05:19.792427 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.792616 master-0 kubenswrapper[26053]: I0318 09:05:19.792592 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 09:05:19.792769 master-0 kubenswrapper[26053]: I0318 09:05:19.792747 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 09:05:19.792888 master-0 kubenswrapper[26053]: I0318 09:05:19.792869 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.793003 master-0 kubenswrapper[26053]: I0318 09:05:19.792987 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.794013 master-0 kubenswrapper[26053]: I0318 
09:05:19.793196 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.794156 master-0 kubenswrapper[26053]: I0318 09:05:19.794078 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:05:19.794272 master-0 kubenswrapper[26053]: I0318 09:05:19.794198 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:19.794339 master-0 kubenswrapper[26053]: I0318 09:05:19.794216 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:19.794416 master-0 kubenswrapper[26053]: I0318 09:05:19.794394 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.794532 master-0 kubenswrapper[26053]: I0318 09:05:19.794515 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.794660 master-0 kubenswrapper[26053]: I0318 09:05:19.794642 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 09:05:19.794763 master-0 kubenswrapper[26053]: I0318 09:05:19.794745 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 09:05:19.794869 master-0 kubenswrapper[26053]: I0318 09:05:19.794852 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.794982 master-0 
kubenswrapper[26053]: I0318 09:05:19.794964 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.795142 master-0 kubenswrapper[26053]: I0318 09:05:19.794785 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:19.795142 master-0 kubenswrapper[26053]: I0318 09:05:19.794785 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:19.795255 master-0 kubenswrapper[26053]: I0318 09:05:19.795068 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.795368 master-0 kubenswrapper[26053]: I0318 09:05:19.795313 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") " Mar 18 09:05:19.795415 master-0 kubenswrapper[26053]: I0318 09:05:19.795379 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:05:19.795467 master-0 kubenswrapper[26053]: I0318 09:05:19.795422 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca\") pod \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\" (UID: \"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997\") " Mar 18 09:05:19.795547 master-0 kubenswrapper[26053]: I0318 09:05:19.795526 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:19.795805 master-0 kubenswrapper[26053]: I0318 09:05:19.795767 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.797064 master-0 kubenswrapper[26053]: I0318 09:05:19.797000 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:05:19.797666 master-0 kubenswrapper[26053]: I0318 09:05:19.797619 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.797925 master-0 kubenswrapper[26053]: I0318 09:05:19.797883 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798038 master-0 kubenswrapper[26053]: I0318 09:05:19.797914 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.798038 master-0 kubenswrapper[26053]: I0318 09:05:19.797938 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798051 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798078 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798100 26053 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798182 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798203 26053 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798221 26053 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798269 26053 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798363 master-0 kubenswrapper[26053]: I0318 09:05:19.798286 26053 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.798787 master-0 kubenswrapper[26053]: I0318 09:05:19.798373 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:05:19.799060 master-0 kubenswrapper[26053]: I0318 09:05:19.798483 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2" (OuterVolumeSpecName: "kube-api-access-ns9j2") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "kube-api-access-ns9j2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:05:19.801138 master-0 kubenswrapper[26053]: I0318 09:05:19.801000 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.801558 master-0 kubenswrapper[26053]: I0318 09:05:19.801500 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.802769 master-0 kubenswrapper[26053]: I0318 09:05:19.802704 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:05:19.810198 master-0 kubenswrapper[26053]: I0318 09:05:19.805881 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.819282 master-0 kubenswrapper[26053]: I0318 09:05:19.814762 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" (UID: "5ce81927-d5d1-4d4c-99f9-9e0af2a2a997"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:05:19.900512 master-0 kubenswrapper[26053]: I0318 09:05:19.900447 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900512 master-0 kubenswrapper[26053]: I0318 09:05:19.900488 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900512 master-0 kubenswrapper[26053]: I0318 09:05:19.900499 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900512 master-0 kubenswrapper[26053]: I0318 09:05:19.900513 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900512 master-0 kubenswrapper[26053]: I0318 09:05:19.900523 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns9j2\" (UniqueName: \"kubernetes.io/projected/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-kube-api-access-ns9j2\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900947 master-0 kubenswrapper[26053]: I0318 09:05:19.900535 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900947 master-0 kubenswrapper[26053]: I0318 09:05:19.900546 26053 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.900947 master-0 kubenswrapper[26053]: I0318 09:05:19.900556 26053 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:19.921503 master-0 kubenswrapper[26053]: I0318 09:05:19.921419 26053 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bd660d5c-eea7-42b7-ac59-3106a104fd63"
Mar 18 09:05:20.283088 master-0 kubenswrapper[26053]: I0318 09:05:20.283014 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"}
Mar 18 09:05:20.283088 master-0 kubenswrapper[26053]: I0318 09:05:20.283061 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"2634d9587dfb7ddcd22d3ef9df9d0f7c56487d281b92049d90a36186a1a7a6ce"}
Mar 18 09:05:20.285197 master-0 kubenswrapper[26053]: I0318 09:05:20.285148 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx" event={"ID":"5ce81927-d5d1-4d4c-99f9-9e0af2a2a997","Type":"ContainerDied","Data":"2354db6e45a35f5e861f879d34fabf5e99fd18d81643742d89ed59540eb046a6"}
Mar 18 09:05:20.285253 master-0 kubenswrapper[26053]: I0318 09:05:20.285222 26053 scope.go:117] "RemoveContainer" containerID="5ec4bce84348d89e4858afd2a0515b719238cabe72000d120ceed47151955b37"
Mar 18 09:05:20.287002 master-0 kubenswrapper[26053]: I0318 09:05:20.286937 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-759d994cb6-pm8qx"
Mar 18 09:05:20.292364 master-0 kubenswrapper[26053]: I0318 09:05:20.292245 26053 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="4fc555cd68d5d190723bdb906f024eca28a915e20d6010038a593dff24a564cd" exitCode=0
Mar 18 09:05:20.292364 master-0 kubenswrapper[26053]: I0318 09:05:20.292277 26053 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="cd8f1b2378c428693218d79b09a56c9b55b51bb98be0e6bcf8f6074d75fc8fec" exitCode=0
Mar 18 09:05:20.292364 master-0 kubenswrapper[26053]: I0318 09:05:20.292339 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a31ec04000ce15454ff93da1265406bc1c4ac93430867ce448a6dc1acd1c1097"
Mar 18 09:05:20.292500 master-0 kubenswrapper[26053]: I0318 09:05:20.292376 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 18 09:05:20.294038 master-0 kubenswrapper[26053]: I0318 09:05:20.293707 26053 generic.go:334] "Generic (PLEG): container finished" podID="ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" containerID="486d275d0446fe617ce1f81d234818d3aa4d815534024550c6d720ee5fc67ef9" exitCode=0
Mar 18 09:05:20.294038 master-0 kubenswrapper[26053]: I0318 09:05:20.293733 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8","Type":"ContainerDied","Data":"486d275d0446fe617ce1f81d234818d3aa4d815534024550c6d720ee5fc67ef9"}
Mar 18 09:05:20.308250 master-0 kubenswrapper[26053]: I0318 09:05:20.308211 26053 scope.go:117] "RemoveContainer" containerID="3251669fc25c1285249f7f12096305de8608ba905c5c140d1ecff87834122e35"
Mar 18 09:05:20.331080 master-0 kubenswrapper[26053]: I0318 09:05:20.331041 26053 scope.go:117] "RemoveContainer" containerID="6ee1233c71c5af7063fbaa44082f3735eb10f8e872e4942be3116550d1869f80"
Mar 18 09:05:20.343917 master-0 kubenswrapper[26053]: I0318 09:05:20.343519 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"]
Mar 18 09:05:20.353599 master-0 kubenswrapper[26053]: I0318 09:05:20.351960 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-759d994cb6-pm8qx"]
Mar 18 09:05:20.741480 master-0 kubenswrapper[26053]: I0318 09:05:20.741428 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes"
Mar 18 09:05:20.742318 master-0 kubenswrapper[26053]: I0318 09:05:20.742284 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" path="/var/lib/kubelet/pods/5ce81927-d5d1-4d4c-99f9-9e0af2a2a997/volumes"
Mar 18 09:05:20.742955 master-0 kubenswrapper[26053]: I0318 09:05:20.742926 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1fdd0ad-78f6-479c-9fb0-6e124b2fa537" path="/var/lib/kubelet/pods/c1fdd0ad-78f6-479c-9fb0-6e124b2fa537/volumes"
Mar 18 09:05:20.743682 master-0 kubenswrapper[26053]: I0318 09:05:20.743657 26053 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 18 09:05:20.770037 master-0 kubenswrapper[26053]: I0318 09:05:20.768814 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 09:05:20.770037 master-0 kubenswrapper[26053]: I0318 09:05:20.768870 26053 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bd660d5c-eea7-42b7-ac59-3106a104fd63"
Mar 18 09:05:20.774329 master-0 kubenswrapper[26053]: I0318 09:05:20.774257 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 18 09:05:20.774329 master-0 kubenswrapper[26053]: I0318 09:05:20.774308 26053 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="bd660d5c-eea7-42b7-ac59-3106a104fd63"
Mar 18 09:05:21.168984 master-0 kubenswrapper[26053]: I0318 09:05:21.168850 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:05:21.169299 master-0 kubenswrapper[26053]: I0318 09:05:21.169244 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-3-master-0" podUID="4c2cac96-c192-47dd-9c3a-6dc58a165084" containerName="installer" containerID="cri-o://fcc86bc36370a7bbe86a9bc2aedad52fd6ea53360f20071df19fbd25a1f58504" gracePeriod=30
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.307147 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4c2cac96-c192-47dd-9c3a-6dc58a165084/installer/0.log"
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.307195 26053 generic.go:334] "Generic (PLEG): container finished" podID="4c2cac96-c192-47dd-9c3a-6dc58a165084" containerID="fcc86bc36370a7bbe86a9bc2aedad52fd6ea53360f20071df19fbd25a1f58504" exitCode=1
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.307273 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4c2cac96-c192-47dd-9c3a-6dc58a165084","Type":"ContainerDied","Data":"fcc86bc36370a7bbe86a9bc2aedad52fd6ea53360f20071df19fbd25a1f58504"}
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.311604 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"}
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.311654 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"}
Mar 18 09:05:21.312091 master-0 kubenswrapper[26053]: I0318 09:05:21.311663 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"}
Mar 18 09:05:21.348592 master-0 kubenswrapper[26053]: I0318 09:05:21.346712 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.346692798 podStartE2EDuration="2.346692798s" podCreationTimestamp="2026-03-18 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:21.341474045 +0000 UTC m=+108.834825426" watchObservedRunningTime="2026-03-18 09:05:21.346692798 +0000 UTC m=+108.840044179"
Mar 18 09:05:21.857479 master-0 kubenswrapper[26053]: I0318 09:05:21.857445 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4c2cac96-c192-47dd-9c3a-6dc58a165084/installer/0.log"
Mar 18 09:05:21.857764 master-0 kubenswrapper[26053]: I0318 09:05:21.857746 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:05:21.933750 master-0 kubenswrapper[26053]: I0318 09:05:21.933692 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock\") pod \"4c2cac96-c192-47dd-9c3a-6dc58a165084\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") "
Mar 18 09:05:21.933951 master-0 kubenswrapper[26053]: I0318 09:05:21.933782 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access\") pod \"4c2cac96-c192-47dd-9c3a-6dc58a165084\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") "
Mar 18 09:05:21.933951 master-0 kubenswrapper[26053]: I0318 09:05:21.933870 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir\") pod \"4c2cac96-c192-47dd-9c3a-6dc58a165084\" (UID: \"4c2cac96-c192-47dd-9c3a-6dc58a165084\") "
Mar 18 09:05:21.934378 master-0 kubenswrapper[26053]: I0318 09:05:21.934279 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c2cac96-c192-47dd-9c3a-6dc58a165084" (UID: "4c2cac96-c192-47dd-9c3a-6dc58a165084"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:21.934471 master-0 kubenswrapper[26053]: I0318 09:05:21.934398 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock" (OuterVolumeSpecName: "var-lock") pod "4c2cac96-c192-47dd-9c3a-6dc58a165084" (UID: "4c2cac96-c192-47dd-9c3a-6dc58a165084"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:21.944028 master-0 kubenswrapper[26053]: I0318 09:05:21.943958 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c2cac96-c192-47dd-9c3a-6dc58a165084" (UID: "4c2cac96-c192-47dd-9c3a-6dc58a165084"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:05:21.997780 master-0 kubenswrapper[26053]: I0318 09:05:21.986128 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:05:22.035870 master-0 kubenswrapper[26053]: I0318 09:05:22.035664 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.035870 master-0 kubenswrapper[26053]: I0318 09:05:22.035708 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c2cac96-c192-47dd-9c3a-6dc58a165084-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.035870 master-0 kubenswrapper[26053]: I0318 09:05:22.035719 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c2cac96-c192-47dd-9c3a-6dc58a165084-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.140222 master-0 kubenswrapper[26053]: I0318 09:05:22.140092 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access\") pod \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") "
Mar 18 09:05:22.140222 master-0 kubenswrapper[26053]: I0318 09:05:22.140203 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock\") pod \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") "
Mar 18 09:05:22.140463 master-0 kubenswrapper[26053]: I0318 09:05:22.140250 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir\") pod \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\" (UID: \"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8\") "
Mar 18 09:05:22.140643 master-0 kubenswrapper[26053]: I0318 09:05:22.140602 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" (UID: "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:22.140715 master-0 kubenswrapper[26053]: I0318 09:05:22.140653 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock" (OuterVolumeSpecName: "var-lock") pod "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" (UID: "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:05:22.146209 master-0 kubenswrapper[26053]: I0318 09:05:22.146143 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" (UID: "ea4b43a1-e9cd-44e4-9c79-55c53146d9e8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:05:22.242496 master-0 kubenswrapper[26053]: I0318 09:05:22.242436 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.242496 master-0 kubenswrapper[26053]: I0318 09:05:22.242493 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.242753 master-0 kubenswrapper[26053]: I0318 09:05:22.242513 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea4b43a1-e9cd-44e4-9c79-55c53146d9e8-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:05:22.323186 master-0 kubenswrapper[26053]: I0318 09:05:22.322432 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4c2cac96-c192-47dd-9c3a-6dc58a165084/installer/0.log"
Mar 18 09:05:22.323186 master-0 kubenswrapper[26053]: I0318 09:05:22.322555 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4c2cac96-c192-47dd-9c3a-6dc58a165084","Type":"ContainerDied","Data":"6a1272c41fa1218103cafdda6a224eecf47d8f9487ecbbc736f479d5ef62ea1d"}
Mar 18 09:05:22.323186 master-0 kubenswrapper[26053]: I0318 09:05:22.322621 26053 scope.go:117] "RemoveContainer" containerID="fcc86bc36370a7bbe86a9bc2aedad52fd6ea53360f20071df19fbd25a1f58504"
Mar 18 09:05:22.323186 master-0 kubenswrapper[26053]: I0318 09:05:22.322620 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 18 09:05:22.330530 master-0 kubenswrapper[26053]: I0318 09:05:22.330469 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"ea4b43a1-e9cd-44e4-9c79-55c53146d9e8","Type":"ContainerDied","Data":"9ee48354727ea7c64e2885f055c6cd978491aac4b6b6159d9bf71084affdae95"}
Mar 18 09:05:22.330530 master-0 kubenswrapper[26053]: I0318 09:05:22.330511 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee48354727ea7c64e2885f055c6cd978491aac4b6b6159d9bf71084affdae95"
Mar 18 09:05:22.330633 master-0 kubenswrapper[26053]: I0318 09:05:22.330534 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 18 09:05:22.377971 master-0 kubenswrapper[26053]: I0318 09:05:22.377914 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:05:22.390437 master-0 kubenswrapper[26053]: I0318 09:05:22.390335 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 18 09:05:22.740553 master-0 kubenswrapper[26053]: I0318 09:05:22.738547 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2cac96-c192-47dd-9c3a-6dc58a165084" path="/var/lib/kubelet/pods/4c2cac96-c192-47dd-9c3a-6dc58a165084/volumes"
Mar 18 09:05:25.434556 master-0 kubenswrapper[26053]: I0318 09:05:25.434306 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: E0318 09:05:25.434849 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2cac96-c192-47dd-9c3a-6dc58a165084" containerName="installer"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.434886 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2cac96-c192-47dd-9c3a-6dc58a165084" containerName="installer"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: E0318 09:05:25.434951 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" containerName="installer"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.434968 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" containerName="installer"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: E0318 09:05:25.435004 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerName="oauth-openshift"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.435022 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerName="oauth-openshift"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.435405 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea4b43a1-e9cd-44e4-9c79-55c53146d9e8" containerName="installer"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.435461 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ce81927-d5d1-4d4c-99f9-9e0af2a2a997" containerName="oauth-openshift"
Mar 18 09:05:25.435492 master-0 kubenswrapper[26053]: I0318 09:05:25.435488 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2cac96-c192-47dd-9c3a-6dc58a165084" containerName="installer"
Mar 18 09:05:25.439953 master-0 kubenswrapper[26053]: I0318 09:05:25.436661 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.439953 master-0 kubenswrapper[26053]: I0318 09:05:25.439327 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 18 09:05:25.439953 master-0 kubenswrapper[26053]: I0318 09:05:25.439333 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6mb4h"
Mar 18 09:05:25.483792 master-0 kubenswrapper[26053]: I0318 09:05:25.479876 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 09:05:25.489044 master-0 kubenswrapper[26053]: I0318 09:05:25.488895 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.489211 master-0 kubenswrapper[26053]: I0318 09:05:25.489107 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.489211 master-0 kubenswrapper[26053]: I0318 09:05:25.489176 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.590785 master-0 kubenswrapper[26053]: I0318 09:05:25.590702 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.590785 master-0 kubenswrapper[26053]: I0318 09:05:25.590768 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.591129 master-0 kubenswrapper[26053]: I0318 09:05:25.590820 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.591129 master-0 kubenswrapper[26053]: I0318 09:05:25.590909 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.591294 master-0 kubenswrapper[26053]: I0318 09:05:25.591201 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.619656 master-0 kubenswrapper[26053]: I0318 09:05:25.619592 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access\") pod \"installer-4-master-0\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.769754 master-0 kubenswrapper[26053]: I0318 09:05:25.769644 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 18 09:05:25.814505 master-0 kubenswrapper[26053]: I0318 09:05:25.814420 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:05:25.814505 master-0 kubenswrapper[26053]: I0318 09:05:25.814507 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:05:26.254873 master-0 kubenswrapper[26053]: I0318 09:05:26.254587 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 18 09:05:26.258837 master-0 kubenswrapper[26053]: W0318 09:05:26.258777 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6030c175_df60_4af1_85b9_78a2cdc9f320.slice/crio-688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8 WatchSource:0}: Error finding container 688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8: Status 404 returned error can't find the container with id 688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8
Mar 18 09:05:26.373453 master-0 kubenswrapper[26053]: I0318 09:05:26.373389 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6030c175-df60-4af1-85b9-78a2cdc9f320","Type":"ContainerStarted","Data":"688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8"}
Mar 18 09:05:27.401608 master-0 kubenswrapper[26053]: I0318 09:05:27.399011 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-688488c6-pgjmr"]
Mar 18 09:05:27.403092 master-0 kubenswrapper[26053]: I0318 09:05:27.402071 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.407668 master-0 kubenswrapper[26053]: I0318 09:05:27.406175 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 09:05:27.407668 master-0 kubenswrapper[26053]: I0318 09:05:27.406363 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-gfnn4"
Mar 18 09:05:27.407668 master-0 kubenswrapper[26053]: I0318 09:05:27.406405 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 09:05:27.407668 master-0 kubenswrapper[26053]: I0318 09:05:27.407081 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 09:05:27.407668 master-0 kubenswrapper[26053]: I0318 09:05:27.407406 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 09:05:27.408390 master-0 kubenswrapper[26053]: I0318 09:05:27.408199 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 18 09:05:27.409109 master-0 kubenswrapper[26053]: I0318 09:05:27.409062 26053 reflector.go:368] Caches populated for *v1.ConfigMap
from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 09:05:27.410843 master-0 kubenswrapper[26053]: I0318 09:05:27.409971 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 09:05:27.410843 master-0 kubenswrapper[26053]: I0318 09:05:27.410131 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 09:05:27.410843 master-0 kubenswrapper[26053]: I0318 09:05:27.410310 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:05:27.424638 master-0 kubenswrapper[26053]: I0318 09:05:27.420852 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6030c175-df60-4af1-85b9-78a2cdc9f320","Type":"ContainerStarted","Data":"9091d184d75db9d0c77c8723dec541f4082b03e6911028f0c8d98b6d2257456b"} Mar 18 09:05:27.426981 master-0 kubenswrapper[26053]: I0318 09:05:27.426927 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 09:05:27.427370 master-0 kubenswrapper[26053]: I0318 09:05:27.427301 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:05:27.431685 master-0 kubenswrapper[26053]: I0318 09:05:27.431203 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-688488c6-pgjmr"] Mar 18 09:05:27.446991 master-0 kubenswrapper[26053]: I0318 09:05:27.446906 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:05:27.455393 master-0 kubenswrapper[26053]: I0318 09:05:27.455324 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 18 09:05:27.475094 master-0 kubenswrapper[26053]: I0318 09:05:27.474997 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=2.47495366 podStartE2EDuration="2.47495366s" podCreationTimestamp="2026-03-18 09:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:27.473952185 +0000 UTC m=+114.967303566" watchObservedRunningTime="2026-03-18 09:05:27.47495366 +0000 UTC m=+114.968305041" Mar 18 09:05:27.529507 master-0 kubenswrapper[26053]: I0318 09:05:27.529407 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529778 master-0 kubenswrapper[26053]: I0318 09:05:27.529536 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-login\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529778 master-0 kubenswrapper[26053]: I0318 09:05:27.529626 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-serving-cert\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: 
\"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529778 master-0 kubenswrapper[26053]: I0318 09:05:27.529671 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529778 master-0 kubenswrapper[26053]: I0318 09:05:27.529713 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvcv7\" (UniqueName: \"kubernetes.io/projected/1039a3d2-df65-4e8b-85b1-4f99469f5459-kube-api-access-lvcv7\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529975 master-0 kubenswrapper[26053]: I0318 09:05:27.529791 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-cliconfig\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" Mar 18 09:05:27.529975 master-0 kubenswrapper[26053]: I0318 09:05:27.529947 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-router-certs\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" 
Mar 18 09:05:27.530064 master-0 kubenswrapper[26053]: I0318 09:05:27.530016 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-service-ca\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.530103 master-0 kubenswrapper[26053]: I0318 09:05:27.530063 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-policies\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.530201 master-0 kubenswrapper[26053]: I0318 09:05:27.530099 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-dir\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.530201 master-0 kubenswrapper[26053]: I0318 09:05:27.530141 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-session\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.530314 master-0 kubenswrapper[26053]: I0318 09:05:27.530260 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-error\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.530374 master-0 kubenswrapper[26053]: I0318 09:05:27.530324 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.631768 master-0 kubenswrapper[26053]: I0318 09:05:27.631693 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632017 master-0 kubenswrapper[26053]: I0318 09:05:27.631809 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-login\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632017 master-0 kubenswrapper[26053]: I0318 09:05:27.631894 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-serving-cert\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632017 master-0 kubenswrapper[26053]: I0318 09:05:27.631972 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632218 master-0 kubenswrapper[26053]: I0318 09:05:27.632044 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvcv7\" (UniqueName: \"kubernetes.io/projected/1039a3d2-df65-4e8b-85b1-4f99469f5459-kube-api-access-lvcv7\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632218 master-0 kubenswrapper[26053]: I0318 09:05:27.632142 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-cliconfig\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632362 master-0 kubenswrapper[26053]: I0318 09:05:27.632238 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-router-certs\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632432 master-0 kubenswrapper[26053]: I0318 09:05:27.632344 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-service-ca\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632504 master-0 kubenswrapper[26053]: I0318 09:05:27.632477 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-policies\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.632609 master-0 kubenswrapper[26053]: I0318 09:05:27.632524 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-dir\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.633006 master-0 kubenswrapper[26053]: I0318 09:05:27.632951 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-dir\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.633094 master-0 kubenswrapper[26053]: I0318 09:05:27.633032 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-session\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.633171 master-0 kubenswrapper[26053]: I0318 09:05:27.633104 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-error\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.633171 master-0 kubenswrapper[26053]: I0318 09:05:27.633140 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.634423 master-0 kubenswrapper[26053]: I0318 09:05:27.634379 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-service-ca\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.634662 master-0 kubenswrapper[26053]: I0318 09:05:27.634624 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-audit-policies\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.634741 master-0 kubenswrapper[26053]: I0318 09:05:27.634672 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-cliconfig\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.635240 master-0 kubenswrapper[26053]: I0318 09:05:27.635176 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.637185 master-0 kubenswrapper[26053]: I0318 09:05:27.637121 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-router-certs\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.637185 master-0 kubenswrapper[26053]: I0318 09:05:27.637101 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-login\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.637492 master-0 kubenswrapper[26053]: I0318 09:05:27.637447 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-session\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.638430 master-0 kubenswrapper[26053]: I0318 09:05:27.638379 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.640097 master-0 kubenswrapper[26053]: I0318 09:05:27.640067 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-system-serving-cert\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.641145 master-0 kubenswrapper[26053]: I0318 09:05:27.641101 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-error\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.645979 master-0 kubenswrapper[26053]: I0318 09:05:27.645889 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1039a3d2-df65-4e8b-85b1-4f99469f5459-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.663035 master-0 kubenswrapper[26053]: I0318 09:05:27.662903 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvcv7\" (UniqueName: \"kubernetes.io/projected/1039a3d2-df65-4e8b-85b1-4f99469f5459-kube-api-access-lvcv7\") pod \"oauth-openshift-688488c6-pgjmr\" (UID: \"1039a3d2-df65-4e8b-85b1-4f99469f5459\") " pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.755658 master-0 kubenswrapper[26053]: I0318 09:05:27.755189 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:27.858276 master-0 kubenswrapper[26053]: I0318 09:05:27.858181 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body=
Mar 18 09:05:27.858491 master-0 kubenswrapper[26053]: I0318 09:05:27.858330 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused"
Mar 18 09:05:28.340600 master-0 kubenswrapper[26053]: I0318 09:05:28.340530 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-688488c6-pgjmr"]
Mar 18 09:05:28.345298 master-0 kubenswrapper[26053]: W0318 09:05:28.345248 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1039a3d2_df65_4e8b_85b1_4f99469f5459.slice/crio-3a786995b43b772a963d19bcc4903e9fc379d468d0fa770e5b81e7e87f8af3c4 WatchSource:0}: Error finding container 3a786995b43b772a963d19bcc4903e9fc379d468d0fa770e5b81e7e87f8af3c4: Status 404 returned error can't find the container with id 3a786995b43b772a963d19bcc4903e9fc379d468d0fa770e5b81e7e87f8af3c4
Mar 18 09:05:28.430612 master-0 kubenswrapper[26053]: I0318 09:05:28.430532 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" event={"ID":"1039a3d2-df65-4e8b-85b1-4f99469f5459","Type":"ContainerStarted","Data":"3a786995b43b772a963d19bcc4903e9fc379d468d0fa770e5b81e7e87f8af3c4"}
Mar 18 09:05:29.440744 master-0 kubenswrapper[26053]: I0318 09:05:29.440671 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" event={"ID":"1039a3d2-df65-4e8b-85b1-4f99469f5459","Type":"ContainerStarted","Data":"100b511c933ec2228d7a0e4e3afc7e4659c4a36c171cca158a1ea00395e32964"}
Mar 18 09:05:29.441494 master-0 kubenswrapper[26053]: I0318 09:05:29.441025 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:29.448360 master-0 kubenswrapper[26053]: I0318 09:05:29.448311 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr"
Mar 18 09:05:29.467523 master-0 kubenswrapper[26053]: I0318 09:05:29.467432 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-688488c6-pgjmr" podStartSLOduration=36.46741506 podStartE2EDuration="36.46741506s" podCreationTimestamp="2026-03-18 09:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:05:29.46425241 +0000 UTC m=+116.957603811" watchObservedRunningTime="2026-03-18 09:05:29.46741506 +0000 UTC m=+116.960766441"
Mar 18 09:05:29.722912 master-0 kubenswrapper[26053]: I0318 09:05:29.722852 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:29.723107 master-0 kubenswrapper[26053]: I0318 09:05:29.722932 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:29.723107 master-0 kubenswrapper[26053]: I0318 09:05:29.722954 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:29.723107 master-0 kubenswrapper[26053]: I0318 09:05:29.722970 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:29.723358 master-0 kubenswrapper[26053]: I0318 09:05:29.723298 26053 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 18 09:05:29.723496 master-0 kubenswrapper[26053]: I0318 09:05:29.723414 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 18 09:05:29.730636 master-0 kubenswrapper[26053]: I0318 09:05:29.730545 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:30.456454 master-0 kubenswrapper[26053]: I0318 09:05:30.456375 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:35.813771 master-0 kubenswrapper[26053]: I0318 09:05:35.813674 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:05:35.813771 master-0 kubenswrapper[26053]: I0318 09:05:35.813784 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:05:37.855596 master-0 kubenswrapper[26053]: I0318 09:05:37.855467 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body=
Mar 18 09:05:37.856906 master-0 kubenswrapper[26053]: I0318 09:05:37.855560 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused"
Mar 18 09:05:39.730419 master-0 kubenswrapper[26053]: I0318 09:05:39.730345 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:39.737364 master-0 kubenswrapper[26053]: I0318 09:05:39.737321 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:05:45.371777 master-0 kubenswrapper[26053]: I0318 09:05:45.371656 26053 scope.go:117] "RemoveContainer" containerID="7c2aae6fa53257e6d8c7e1c783c29a93037db597eccbd9c6d53d330e1c671296"
Mar 18 09:05:45.812384 master-0 kubenswrapper[26053]: I0318 09:05:45.812258 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:05:45.812384 master-0 kubenswrapper[26053]: I0318 09:05:45.812356 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:05:47.854717 master-0 kubenswrapper[26053]: I0318 09:05:47.854618 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body=
Mar 18 09:05:47.854717 master-0 kubenswrapper[26053]: I0318 09:05:47.854697 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused"
Mar 18 09:05:55.813899 master-0 kubenswrapper[26053]: I0318 09:05:55.813833 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:05:55.814537 master-0 kubenswrapper[26053]: I0318 09:05:55.813908 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:05:57.855103 master-0 kubenswrapper[26053]: I0318 09:05:57.855006 26053 patch_prober.go:28] interesting pod/console-68c5849c7c-lqm2r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused" start-of-body=
Mar 18 09:05:57.855837 master-0 kubenswrapper[26053]: I0318 09:05:57.855110 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" probeResult="failure" output="Get \"https://10.128.0.84:8443/health\": dial tcp 10.128.0.84:8443: connect: connection refused"
Mar 18 09:05:58.719615 master-0 kubenswrapper[26053]: I0318 09:05:58.719411 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lds2c"]
Mar 18 09:05:58.721098 master-0 kubenswrapper[26053]: I0318 09:05:58.720225 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lds2c"
Mar 18 09:05:58.724693 master-0 kubenswrapper[26053]: I0318 09:05:58.722898 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 09:05:58.724693 master-0 kubenswrapper[26053]: I0318 09:05:58.723096 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qwxs4"
Mar 18 09:05:58.875906 master-0 kubenswrapper[26053]: I0318 09:05:58.875840 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb02136a-629f-450c-bd13-4287849188c6-host\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c"
Mar 18 09:05:58.876501 master-0 kubenswrapper[26053]: I0318 09:05:58.876008 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lh24\" (UniqueName: \"kubernetes.io/projected/cb02136a-629f-450c-bd13-4287849188c6-kube-api-access-2lh24\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c"
Mar 18 09:05:58.876501 master-0 kubenswrapper[26053]: I0318 09:05:58.876075 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb02136a-629f-450c-bd13-4287849188c6-serviceca\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c"
Mar 18 09:05:58.978015 master-0 kubenswrapper[26053]: I0318 09:05:58.977879 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb02136a-629f-450c-bd13-4287849188c6-host\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") "
pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:58.978015 master-0 kubenswrapper[26053]: I0318 09:05:58.977953 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lh24\" (UniqueName: \"kubernetes.io/projected/cb02136a-629f-450c-bd13-4287849188c6-kube-api-access-2lh24\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:58.978015 master-0 kubenswrapper[26053]: I0318 09:05:58.977990 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb02136a-629f-450c-bd13-4287849188c6-serviceca\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:58.978307 master-0 kubenswrapper[26053]: I0318 09:05:58.978013 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cb02136a-629f-450c-bd13-4287849188c6-host\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:58.978811 master-0 kubenswrapper[26053]: I0318 09:05:58.978765 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cb02136a-629f-450c-bd13-4287849188c6-serviceca\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:59.005683 master-0 kubenswrapper[26053]: I0318 09:05:59.005257 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lh24\" (UniqueName: \"kubernetes.io/projected/cb02136a-629f-450c-bd13-4287849188c6-kube-api-access-2lh24\") pod \"node-ca-lds2c\" (UID: \"cb02136a-629f-450c-bd13-4287849188c6\") " pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:59.068026 master-0 
kubenswrapper[26053]: I0318 09:05:59.067968 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lds2c" Mar 18 09:05:59.085526 master-0 kubenswrapper[26053]: W0318 09:05:59.085474 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb02136a_629f_450c_bd13_4287849188c6.slice/crio-7d37310d16ccd445cf75cb2546dd086364b7fbaaebd0cdd50dec09299f7f1530 WatchSource:0}: Error finding container 7d37310d16ccd445cf75cb2546dd086364b7fbaaebd0cdd50dec09299f7f1530: Status 404 returned error can't find the container with id 7d37310d16ccd445cf75cb2546dd086364b7fbaaebd0cdd50dec09299f7f1530 Mar 18 09:05:59.698491 master-0 kubenswrapper[26053]: I0318 09:05:59.698443 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lds2c" event={"ID":"cb02136a-629f-450c-bd13-4287849188c6","Type":"ContainerStarted","Data":"7d37310d16ccd445cf75cb2546dd086364b7fbaaebd0cdd50dec09299f7f1530"} Mar 18 09:05:59.868595 master-0 kubenswrapper[26053]: I0318 09:05:59.862831 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-68c5849c7c-lqm2r"] Mar 18 09:05:59.906246 master-0 kubenswrapper[26053]: I0318 09:05:59.906191 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:05:59.914602 master-0 kubenswrapper[26053]: I0318 09:05:59.908162 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.919078 master-0 kubenswrapper[26053]: I0318 09:05:59.918918 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:05:59.990283 master-0 kubenswrapper[26053]: I0318 09:05:59.990197 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.990501 master-0 kubenswrapper[26053]: I0318 09:05:59.990310 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.990501 master-0 kubenswrapper[26053]: I0318 09:05:59.990345 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.990634 master-0 kubenswrapper[26053]: I0318 09:05:59.990606 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.991590 master-0 
kubenswrapper[26053]: I0318 09:05:59.990645 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.991590 master-0 kubenswrapper[26053]: I0318 09:05:59.990781 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc7f8\" (UniqueName: \"kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:05:59.991590 master-0 kubenswrapper[26053]: I0318 09:05:59.990821 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093280 master-0 kubenswrapper[26053]: I0318 09:06:00.093225 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093293 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle\") pod \"console-7748c6b99d-fkjm5\" (UID: 
\"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093319 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093338 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093389 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093407 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.093474 master-0 kubenswrapper[26053]: I0318 09:06:00.093440 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc7f8\" (UniqueName: \"kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8\") 
pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.094605 master-0 kubenswrapper[26053]: I0318 09:06:00.094550 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.095159 master-0 kubenswrapper[26053]: I0318 09:06:00.095135 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.095890 master-0 kubenswrapper[26053]: I0318 09:06:00.095672 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.096674 master-0 kubenswrapper[26053]: I0318 09:06:00.096619 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.097024 master-0 kubenswrapper[26053]: I0318 09:06:00.096992 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle\") pod 
\"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.098604 master-0 kubenswrapper[26053]: I0318 09:06:00.098214 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.111550 master-0 kubenswrapper[26053]: I0318 09:06:00.111511 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc7f8\" (UniqueName: \"kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8\") pod \"console-7748c6b99d-fkjm5\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.277802 master-0 kubenswrapper[26053]: I0318 09:06:00.277681 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:00.689336 master-0 kubenswrapper[26053]: I0318 09:06:00.689204 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:06:00.721546 master-0 kubenswrapper[26053]: I0318 09:06:00.721484 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7748c6b99d-fkjm5" event={"ID":"6a611129-8d70-4618-8512-8e0a3491353e","Type":"ContainerStarted","Data":"008295768abec88e92581978488e6f15584ca83dc897a756588e8c22ad9deff9"} Mar 18 09:06:01.739721 master-0 kubenswrapper[26053]: I0318 09:06:01.739542 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7748c6b99d-fkjm5" event={"ID":"6a611129-8d70-4618-8512-8e0a3491353e","Type":"ContainerStarted","Data":"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc"} Mar 18 09:06:02.749043 master-0 kubenswrapper[26053]: I0318 09:06:02.748950 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lds2c" event={"ID":"cb02136a-629f-450c-bd13-4287849188c6","Type":"ContainerStarted","Data":"4a64722413386fc3de17e7aaa06ac1ee88a6767178ccb492728b07a40e50875c"} Mar 18 09:06:02.779872 master-0 kubenswrapper[26053]: I0318 09:06:02.779769 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7748c6b99d-fkjm5" podStartSLOduration=3.7797452099999997 podStartE2EDuration="3.77974521s" podCreationTimestamp="2026-03-18 09:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:06:01.76290011 +0000 UTC m=+149.256251551" watchObservedRunningTime="2026-03-18 09:06:02.77974521 +0000 UTC m=+150.273096591" Mar 18 09:06:02.780229 master-0 kubenswrapper[26053]: I0318 09:06:02.780005 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/node-ca-lds2c" podStartSLOduration=1.9545771589999998 podStartE2EDuration="4.779996866s" podCreationTimestamp="2026-03-18 09:05:58 +0000 UTC" firstStartedPulling="2026-03-18 09:05:59.087072323 +0000 UTC m=+146.580423704" lastFinishedPulling="2026-03-18 09:06:01.91249201 +0000 UTC m=+149.405843411" observedRunningTime="2026-03-18 09:06:02.774541727 +0000 UTC m=+150.267893108" watchObservedRunningTime="2026-03-18 09:06:02.779996866 +0000 UTC m=+150.273348257" Mar 18 09:06:04.824543 master-0 kubenswrapper[26053]: I0318 09:06:04.824484 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:06:04.826074 master-0 kubenswrapper[26053]: I0318 09:06:04.825992 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver" containerID="cri-o://f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f" gracePeriod=15 Mar 18 09:06:04.826346 master-0 kubenswrapper[26053]: I0318 09:06:04.826252 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-syncer" containerID="cri-o://51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be" gracePeriod=15 Mar 18 09:06:04.826471 master-0 kubenswrapper[26053]: I0318 09:06:04.826044 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" containerID="cri-o://92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44" gracePeriod=15 Mar 18 09:06:04.826545 master-0 kubenswrapper[26053]: I0318 09:06:04.826181 26053 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca" gracePeriod=15 Mar 18 09:06:04.826671 master-0 kubenswrapper[26053]: I0318 09:06:04.826247 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc" gracePeriod=15 Mar 18 09:06:04.826671 master-0 kubenswrapper[26053]: I0318 09:06:04.826204 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:06:04.827140 master-0 kubenswrapper[26053]: E0318 09:06:04.827057 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-syncer" Mar 18 09:06:04.827140 master-0 kubenswrapper[26053]: I0318 09:06:04.827099 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-syncer" Mar 18 09:06:04.827140 master-0 kubenswrapper[26053]: E0318 09:06:04.827117 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="setup" Mar 18 09:06:04.827140 master-0 kubenswrapper[26053]: I0318 09:06:04.827130 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="setup" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: E0318 09:06:04.827153 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827166 26053 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: E0318 09:06:04.827194 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-insecure-readyz" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827206 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-insecure-readyz" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: E0318 09:06:04.827234 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827247 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: E0318 09:06:04.827276 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827288 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827479 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827501 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 
09:06:04.827535 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-regeneration-controller" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827560 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-cert-syncer" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827601 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-insecure-readyz" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827623 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: E0318 09:06:04.827821 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.828146 master-0 kubenswrapper[26053]: I0318 09:06:04.827835 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3507630eeeca1ec26dca5ed036e3bb" containerName="kube-apiserver-check-endpoints" Mar 18 09:06:04.841472 master-0 kubenswrapper[26053]: I0318 09:06:04.841389 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:06:04.856547 master-0 kubenswrapper[26053]: I0318 09:06:04.846438 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:04.857818 master-0 kubenswrapper[26053]: I0318 09:06:04.857672 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="ac3507630eeeca1ec26dca5ed036e3bb" podUID="7d5ce05b3d592e63f1f92202d52b9635" Mar 18 09:06:04.969812 master-0 kubenswrapper[26053]: I0318 09:06:04.969739 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:04.969942 master-0 kubenswrapper[26053]: I0318 09:06:04.969854 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:04.969942 master-0 kubenswrapper[26053]: I0318 09:06:04.969896 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:04.969942 master-0 kubenswrapper[26053]: I0318 09:06:04.969919 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:04.970053 master-0 kubenswrapper[26053]: I0318 09:06:04.969947 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:04.970053 master-0 kubenswrapper[26053]: I0318 09:06:04.969989 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:04.970053 master-0 kubenswrapper[26053]: I0318 09:06:04.970035 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:04.970144 master-0 kubenswrapper[26053]: I0318 09:06:04.970076 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.027541 master-0 kubenswrapper[26053]: E0318 09:06:05.027470 26053 kubelet.go:1929] "Failed creating a mirror pod for" 
err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.071716 master-0 kubenswrapper[26053]: I0318 09:06:05.071650 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.071905 master-0 kubenswrapper[26053]: I0318 09:06:05.071732 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.071905 master-0 kubenswrapper[26053]: I0318 09:06:05.071769 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.071905 master-0 kubenswrapper[26053]: I0318 09:06:05.071806 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.071905 master-0 kubenswrapper[26053]: I0318 09:06:05.071806 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.072032 master-0 kubenswrapper[26053]: I0318 09:06:05.071946 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.072083 master-0 kubenswrapper[26053]: I0318 09:06:05.072055 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072083 master-0 kubenswrapper[26053]: I0318 09:06:05.072052 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.072153 master-0 kubenswrapper[26053]: I0318 09:06:05.072114 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:05.072153 master-0 kubenswrapper[26053]: I0318 09:06:05.072124 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072355 master-0 kubenswrapper[26053]: I0318 09:06:05.072156 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072355 master-0 kubenswrapper[26053]: I0318 09:06:05.072191 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072355 master-0 kubenswrapper[26053]: I0318 09:06:05.072298 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072551 master-0 kubenswrapper[26053]: I0318 09:06:05.072402 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072551 master-0 kubenswrapper[26053]: I0318 09:06:05.072440 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.072657 master-0 kubenswrapper[26053]: I0318 09:06:05.072548 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.329135 master-0 kubenswrapper[26053]: I0318 09:06:05.329060 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.363284 master-0 kubenswrapper[26053]: W0318 09:06:05.362843 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fb4ea7f83036d9c6adf3454fc7e9db.slice/crio-04fe2e8760a677bd0962315d545927f3c7ccb6b21b89d073ad1dc3b60b41064f WatchSource:0}: Error finding container 04fe2e8760a677bd0962315d545927f3c7ccb6b21b89d073ad1dc3b60b41064f: Status 404 returned error can't find the container with id 04fe2e8760a677bd0962315d545927f3c7ccb6b21b89d073ad1dc3b60b41064f Mar 18 09:06:05.369800 master-0 kubenswrapper[26053]: E0318 09:06:05.369511 26053 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de439d97c8fda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:16fb4ea7f83036d9c6adf3454fc7e9db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:06:05.366931418 +0000 UTC m=+152.860282839,LastTimestamp:2026-03-18 09:06:05.366931418 +0000 UTC m=+152.860282839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:06:05.771512 master-0 kubenswrapper[26053]: I0318 09:06:05.771445 26053 generic.go:334] "Generic (PLEG): container finished" podID="6030c175-df60-4af1-85b9-78a2cdc9f320" containerID="9091d184d75db9d0c77c8723dec541f4082b03e6911028f0c8d98b6d2257456b" exitCode=0 Mar 18 09:06:05.771783 master-0 kubenswrapper[26053]: I0318 09:06:05.771507 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6030c175-df60-4af1-85b9-78a2cdc9f320","Type":"ContainerDied","Data":"9091d184d75db9d0c77c8723dec541f4082b03e6911028f0c8d98b6d2257456b"} Mar 18 09:06:05.774354 master-0 kubenswrapper[26053]: I0318 09:06:05.774307 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-check-endpoints/0.log" Mar 18 09:06:05.774690 master-0 kubenswrapper[26053]: I0318 09:06:05.774635 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:05.775714 master-0 kubenswrapper[26053]: I0318 09:06:05.775688 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:05.776517 master-0 kubenswrapper[26053]: I0318 09:06:05.776475 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44" exitCode=0 Mar 18 09:06:05.776517 master-0 kubenswrapper[26053]: I0318 09:06:05.776499 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca" exitCode=0 Mar 18 09:06:05.776517 master-0 kubenswrapper[26053]: I0318 09:06:05.776508 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc" exitCode=0 Mar 18 09:06:05.776517 master-0 kubenswrapper[26053]: I0318 09:06:05.776516 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be" exitCode=2 Mar 18 09:06:05.776913 master-0 kubenswrapper[26053]: I0318 09:06:05.776592 26053 scope.go:117] "RemoveContainer" containerID="b98c563bab7682462c40e7da7e26ff18216a7a69aec7a61033377ca04547a6d0" Mar 18 09:06:05.779450 master-0 kubenswrapper[26053]: I0318 09:06:05.778275 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"17fe525ef9fd969ea224700d998daa2ed4c945cd5dea489ea725d4fcd88fbd4a"} Mar 18 09:06:05.779450 master-0 kubenswrapper[26053]: I0318 09:06:05.778312 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"16fb4ea7f83036d9c6adf3454fc7e9db","Type":"ContainerStarted","Data":"04fe2e8760a677bd0962315d545927f3c7ccb6b21b89d073ad1dc3b60b41064f"} Mar 18 09:06:05.779450 master-0 kubenswrapper[26053]: E0318 09:06:05.779049 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:05.779661 master-0 kubenswrapper[26053]: I0318 09:06:05.779605 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:05.813095 master-0 kubenswrapper[26053]: I0318 09:06:05.813050 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:06:05.813306 master-0 kubenswrapper[26053]: I0318 09:06:05.813113 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 
18 09:06:06.792716 master-0 kubenswrapper[26053]: I0318 09:06:06.792667 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:06.795280 master-0 kubenswrapper[26053]: E0318 09:06:06.795237 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:06:07.233070 master-0 kubenswrapper[26053]: I0318 09:06:07.232941 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:06:07.234407 master-0 kubenswrapper[26053]: I0318 09:06:07.234336 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:07.368891 master-0 kubenswrapper[26053]: I0318 09:06:07.368808 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir\") pod \"6030c175-df60-4af1-85b9-78a2cdc9f320\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " Mar 18 09:06:07.369086 master-0 kubenswrapper[26053]: I0318 09:06:07.369008 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6030c175-df60-4af1-85b9-78a2cdc9f320" (UID: "6030c175-df60-4af1-85b9-78a2cdc9f320"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:07.369125 master-0 kubenswrapper[26053]: I0318 09:06:07.369090 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access\") pod \"6030c175-df60-4af1-85b9-78a2cdc9f320\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " Mar 18 09:06:07.369156 master-0 kubenswrapper[26053]: I0318 09:06:07.369124 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock\") pod \"6030c175-df60-4af1-85b9-78a2cdc9f320\" (UID: \"6030c175-df60-4af1-85b9-78a2cdc9f320\") " Mar 18 09:06:07.369442 master-0 kubenswrapper[26053]: I0318 09:06:07.369380 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock" (OuterVolumeSpecName: "var-lock") pod "6030c175-df60-4af1-85b9-78a2cdc9f320" (UID: "6030c175-df60-4af1-85b9-78a2cdc9f320"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:07.369910 master-0 kubenswrapper[26053]: I0318 09:06:07.369864 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:07.369969 master-0 kubenswrapper[26053]: I0318 09:06:07.369914 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6030c175-df60-4af1-85b9-78a2cdc9f320-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:07.372858 master-0 kubenswrapper[26053]: I0318 09:06:07.372806 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6030c175-df60-4af1-85b9-78a2cdc9f320" (UID: "6030c175-df60-4af1-85b9-78a2cdc9f320"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:06:07.471454 master-0 kubenswrapper[26053]: I0318 09:06:07.471366 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6030c175-df60-4af1-85b9-78a2cdc9f320-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:07.728710 master-0 kubenswrapper[26053]: I0318 09:06:07.728417 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:07.730192 master-0 kubenswrapper[26053]: I0318 09:06:07.730142 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:07.731523 master-0 kubenswrapper[26053]: I0318 09:06:07.731450 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:07.732664 master-0 kubenswrapper[26053]: I0318 09:06:07.732546 26053 status_manager.go:851] "Failed to get status for pod" podUID="ac3507630eeeca1ec26dca5ed036e3bb" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:07.805135 master-0 kubenswrapper[26053]: I0318 09:06:07.805045 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"6030c175-df60-4af1-85b9-78a2cdc9f320","Type":"ContainerDied","Data":"688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8"} Mar 18 09:06:07.805135 master-0 kubenswrapper[26053]: I0318 09:06:07.805091 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 18 09:06:07.806048 master-0 kubenswrapper[26053]: I0318 09:06:07.805113 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="688ad15aeb0ec37016da7e8e5e668193d1ced1e6bfcb85501404846b9c8bfdd8" Mar 18 09:06:07.811337 master-0 kubenswrapper[26053]: I0318 09:06:07.811267 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_ac3507630eeeca1ec26dca5ed036e3bb/kube-apiserver-cert-syncer/0.log" Mar 18 09:06:07.812657 master-0 kubenswrapper[26053]: I0318 09:06:07.812533 26053 generic.go:334] "Generic (PLEG): container finished" podID="ac3507630eeeca1ec26dca5ed036e3bb" containerID="f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f" exitCode=0 Mar 18 09:06:07.812996 master-0 kubenswrapper[26053]: I0318 09:06:07.812658 26053 scope.go:117] "RemoveContainer" containerID="92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44" Mar 18 09:06:07.812996 master-0 kubenswrapper[26053]: I0318 09:06:07.812688 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:07.838827 master-0 kubenswrapper[26053]: I0318 09:06:07.838720 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:07.839703 master-0 kubenswrapper[26053]: I0318 09:06:07.839634 26053 status_manager.go:851] "Failed to get status for pod" podUID="ac3507630eeeca1ec26dca5ed036e3bb" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:07.843242 master-0 kubenswrapper[26053]: I0318 09:06:07.843117 26053 scope.go:117] "RemoveContainer" containerID="968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca" Mar 18 09:06:07.874058 master-0 kubenswrapper[26053]: I0318 09:06:07.873994 26053 scope.go:117] "RemoveContainer" containerID="907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc" Mar 18 09:06:07.880141 master-0 kubenswrapper[26053]: I0318 09:06:07.880083 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") pod \"ac3507630eeeca1ec26dca5ed036e3bb\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " Mar 18 09:06:07.880292 master-0 kubenswrapper[26053]: I0318 09:06:07.880176 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") pod \"ac3507630eeeca1ec26dca5ed036e3bb\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " Mar 18 
09:06:07.880461 master-0 kubenswrapper[26053]: I0318 09:06:07.880296 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") pod \"ac3507630eeeca1ec26dca5ed036e3bb\" (UID: \"ac3507630eeeca1ec26dca5ed036e3bb\") " Mar 18 09:06:07.882227 master-0 kubenswrapper[26053]: I0318 09:06:07.882170 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ac3507630eeeca1ec26dca5ed036e3bb" (UID: "ac3507630eeeca1ec26dca5ed036e3bb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:07.882400 master-0 kubenswrapper[26053]: I0318 09:06:07.882238 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "ac3507630eeeca1ec26dca5ed036e3bb" (UID: "ac3507630eeeca1ec26dca5ed036e3bb"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:07.882400 master-0 kubenswrapper[26053]: I0318 09:06:07.882287 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ac3507630eeeca1ec26dca5ed036e3bb" (UID: "ac3507630eeeca1ec26dca5ed036e3bb"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:06:07.915334 master-0 kubenswrapper[26053]: I0318 09:06:07.915262 26053 scope.go:117] "RemoveContainer" containerID="51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be" Mar 18 09:06:07.953811 master-0 kubenswrapper[26053]: I0318 09:06:07.952979 26053 scope.go:117] "RemoveContainer" containerID="f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f" Mar 18 09:06:07.979680 master-0 kubenswrapper[26053]: I0318 09:06:07.979637 26053 scope.go:117] "RemoveContainer" containerID="6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666" Mar 18 09:06:07.982334 master-0 kubenswrapper[26053]: I0318 09:06:07.982294 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:07.982334 master-0 kubenswrapper[26053]: I0318 09:06:07.982330 26053 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:07.982466 master-0 kubenswrapper[26053]: I0318 09:06:07.982343 26053 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ac3507630eeeca1ec26dca5ed036e3bb-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:08.004650 master-0 kubenswrapper[26053]: I0318 09:06:08.004602 26053 scope.go:117] "RemoveContainer" containerID="92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44" Mar 18 09:06:08.005657 master-0 kubenswrapper[26053]: E0318 09:06:08.005426 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44\": container with ID starting with 
92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44 not found: ID does not exist" containerID="92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44" Mar 18 09:06:08.005657 master-0 kubenswrapper[26053]: I0318 09:06:08.005461 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44"} err="failed to get container status \"92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44\": rpc error: code = NotFound desc = could not find container \"92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44\": container with ID starting with 92d35a6a717c4777bb6b537ec9a6f0bb847acc1aa38b242c3d1cac7bc90efd44 not found: ID does not exist" Mar 18 09:06:08.005657 master-0 kubenswrapper[26053]: I0318 09:06:08.005505 26053 scope.go:117] "RemoveContainer" containerID="968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca" Mar 18 09:06:08.006335 master-0 kubenswrapper[26053]: E0318 09:06:08.006290 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca\": container with ID starting with 968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca not found: ID does not exist" containerID="968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca" Mar 18 09:06:08.006688 master-0 kubenswrapper[26053]: I0318 09:06:08.006331 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca"} err="failed to get container status \"968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca\": rpc error: code = NotFound desc = could not find container \"968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca\": container with ID starting with 
968ddd6b79e0d2d59e973bbf58446176d7d2eb085b2f0904548bb84e2c0df9ca not found: ID does not exist" Mar 18 09:06:08.006688 master-0 kubenswrapper[26053]: I0318 09:06:08.006390 26053 scope.go:117] "RemoveContainer" containerID="907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc" Mar 18 09:06:08.007137 master-0 kubenswrapper[26053]: E0318 09:06:08.006934 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc\": container with ID starting with 907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc not found: ID does not exist" containerID="907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc" Mar 18 09:06:08.007137 master-0 kubenswrapper[26053]: I0318 09:06:08.006966 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc"} err="failed to get container status \"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc\": rpc error: code = NotFound desc = could not find container \"907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc\": container with ID starting with 907b93d88590538b76b9f3e15155463f097093d928671e9cdef117547df54cfc not found: ID does not exist" Mar 18 09:06:08.007137 master-0 kubenswrapper[26053]: I0318 09:06:08.006980 26053 scope.go:117] "RemoveContainer" containerID="51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be" Mar 18 09:06:08.009178 master-0 kubenswrapper[26053]: E0318 09:06:08.008474 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be\": container with ID starting with 51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be not found: ID does not exist" 
containerID="51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be" Mar 18 09:06:08.009178 master-0 kubenswrapper[26053]: I0318 09:06:08.008598 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be"} err="failed to get container status \"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be\": rpc error: code = NotFound desc = could not find container \"51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be\": container with ID starting with 51c70143b4526577c0b7ded62575d5c728e020841c7047685596b1ab541784be not found: ID does not exist" Mar 18 09:06:08.009178 master-0 kubenswrapper[26053]: I0318 09:06:08.008636 26053 scope.go:117] "RemoveContainer" containerID="f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f" Mar 18 09:06:08.009522 master-0 kubenswrapper[26053]: E0318 09:06:08.009429 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f\": container with ID starting with f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f not found: ID does not exist" containerID="f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f" Mar 18 09:06:08.009522 master-0 kubenswrapper[26053]: I0318 09:06:08.009475 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f"} err="failed to get container status \"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f\": rpc error: code = NotFound desc = could not find container \"f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f\": container with ID starting with f7758a746eb691fed619094369d219bf54bedd1f477e550165239c0a34de5c0f not found: ID does not exist" Mar 18 09:06:08.009522 master-0 
kubenswrapper[26053]: I0318 09:06:08.009504 26053 scope.go:117] "RemoveContainer" containerID="6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666" Mar 18 09:06:08.010753 master-0 kubenswrapper[26053]: E0318 09:06:08.010683 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666\": container with ID starting with 6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666 not found: ID does not exist" containerID="6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666" Mar 18 09:06:08.011298 master-0 kubenswrapper[26053]: I0318 09:06:08.010756 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666"} err="failed to get container status \"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666\": rpc error: code = NotFound desc = could not find container \"6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666\": container with ID starting with 6334e1fed827ddd985e374f2cd49bc8670ca90ca11daa38cf82b7dd454965666 not found: ID does not exist" Mar 18 09:06:08.131734 master-0 kubenswrapper[26053]: I0318 09:06:08.131677 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:08.132142 master-0 kubenswrapper[26053]: I0318 09:06:08.132093 26053 status_manager.go:851] "Failed to get status for pod" podUID="ac3507630eeeca1ec26dca5ed036e3bb" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:08.746189 master-0 kubenswrapper[26053]: I0318 09:06:08.746103 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac3507630eeeca1ec26dca5ed036e3bb" path="/var/lib/kubelet/pods/ac3507630eeeca1ec26dca5ed036e3bb/volumes" Mar 18 09:06:09.276155 master-0 kubenswrapper[26053]: E0318 09:06:09.275948 26053 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189de439d97c8fda openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:16fb4ea7f83036d9c6adf3454fc7e9db,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:06:05.366931418 +0000 UTC m=+152.860282839,LastTimestamp:2026-03-18 09:06:05.366931418 +0000 UTC m=+152.860282839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:06:10.278685 master-0 kubenswrapper[26053]: I0318 09:06:10.278610 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:10.278685 master-0 kubenswrapper[26053]: I0318 09:06:10.278669 26053 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:06:10.281600 master-0 kubenswrapper[26053]: I0318 09:06:10.281506 26053 patch_prober.go:28] interesting pod/console-7748c6b99d-fkjm5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:06:10.281753 master-0 kubenswrapper[26053]: I0318 09:06:10.281617 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:06:12.739448 master-0 kubenswrapper[26053]: I0318 09:06:12.739265 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:13.900036 master-0 kubenswrapper[26053]: I0318 09:06:13.899875 26053 generic.go:334] "Generic (PLEG): container finished" podID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerID="81a151a3aa12b152f9071a9f499fc6c53ed0410a76702e645d7cd7db06bbf80b" exitCode=0 Mar 18 09:06:13.900036 master-0 kubenswrapper[26053]: I0318 09:06:13.899933 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" event={"ID":"87381a51-96e6-4e86-bdae-c8ac3fc7a039","Type":"ContainerDied","Data":"81a151a3aa12b152f9071a9f499fc6c53ed0410a76702e645d7cd7db06bbf80b"} Mar 18 09:06:14.104041 master-0 kubenswrapper[26053]: I0318 09:06:14.103996 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:06:14.105301 master-0 kubenswrapper[26053]: I0318 09:06:14.105160 26053 status_manager.go:851] "Failed to get status for pod" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-7875f64c8-kmr8t\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.108894 master-0 kubenswrapper[26053]: I0318 09:06:14.107156 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.296052 master-0 kubenswrapper[26053]: I0318 09:06:14.295471 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296052 master-0 kubenswrapper[26053]: I0318 09:06:14.296050 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296508 master-0 kubenswrapper[26053]: I0318 09:06:14.296082 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") pod 
\"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296508 master-0 kubenswrapper[26053]: I0318 09:06:14.296146 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296508 master-0 kubenswrapper[26053]: I0318 09:06:14.296186 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296508 master-0 kubenswrapper[26053]: I0318 09:06:14.296286 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.296508 master-0 kubenswrapper[26053]: I0318 09:06:14.296357 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") pod \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\" (UID: \"87381a51-96e6-4e86-bdae-c8ac3fc7a039\") " Mar 18 09:06:14.297119 master-0 kubenswrapper[26053]: I0318 09:06:14.297040 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log" (OuterVolumeSpecName: "audit-log") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: 
"87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 18 09:06:14.297831 master-0 kubenswrapper[26053]: I0318 09:06:14.297767 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:06:14.298127 master-0 kubenswrapper[26053]: I0318 09:06:14.298040 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "metrics-server-audit-profiles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:06:14.398394 master-0 kubenswrapper[26053]: I0318 09:06:14.398332 26053 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/87381a51-96e6-4e86-bdae-c8ac3fc7a039-audit-log\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.398394 master-0 kubenswrapper[26053]: I0318 09:06:14.398385 26053 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.398684 master-0 kubenswrapper[26053]: I0318 09:06:14.398408 26053 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87381a51-96e6-4e86-bdae-c8ac3fc7a039-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.613086 master-0 kubenswrapper[26053]: E0318 09:06:14.612907 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.613747 master-0 kubenswrapper[26053]: E0318 09:06:14.613695 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.614417 master-0 kubenswrapper[26053]: E0318 09:06:14.614363 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.615329 master-0 kubenswrapper[26053]: E0318 
09:06:14.615230 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.616325 master-0 kubenswrapper[26053]: E0318 09:06:14.616230 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.616325 master-0 kubenswrapper[26053]: I0318 09:06:14.616282 26053 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 09:06:14.617066 master-0 kubenswrapper[26053]: E0318 09:06:14.617011 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 09:06:14.666391 master-0 kubenswrapper[26053]: I0318 09:06:14.666259 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx" (OuterVolumeSpecName: "kube-api-access-brzfx") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "kube-api-access-brzfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:06:14.666391 master-0 kubenswrapper[26053]: I0318 09:06:14.666349 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:06:14.666834 master-0 kubenswrapper[26053]: I0318 09:06:14.666726 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:06:14.666834 master-0 kubenswrapper[26053]: I0318 09:06:14.666816 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "87381a51-96e6-4e86-bdae-c8ac3fc7a039" (UID: "87381a51-96e6-4e86-bdae-c8ac3fc7a039"). InnerVolumeSpecName "secret-metrics-client-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:06:14.702290 master-0 kubenswrapper[26053]: I0318 09:06:14.702221 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brzfx\" (UniqueName: \"kubernetes.io/projected/87381a51-96e6-4e86-bdae-c8ac3fc7a039-kube-api-access-brzfx\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.702290 master-0 kubenswrapper[26053]: I0318 09:06:14.702265 26053 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.702290 master-0 kubenswrapper[26053]: I0318 09:06:14.702280 26053 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.702290 master-0 kubenswrapper[26053]: I0318 09:06:14.702293 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87381a51-96e6-4e86-bdae-c8ac3fc7a039-client-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:06:14.818964 master-0 kubenswrapper[26053]: E0318 09:06:14.818861 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 09:06:14.917539 master-0 kubenswrapper[26053]: I0318 09:06:14.917370 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" event={"ID":"87381a51-96e6-4e86-bdae-c8ac3fc7a039","Type":"ContainerDied","Data":"c44219a166b17d244712a88198d9aa0d215e4b99c0debae8766fc702ed86eb66"} Mar 18 09:06:14.917539 master-0 
kubenswrapper[26053]: I0318 09:06:14.917468 26053 scope.go:117] "RemoveContainer" containerID="81a151a3aa12b152f9071a9f499fc6c53ed0410a76702e645d7cd7db06bbf80b" Mar 18 09:06:14.918524 master-0 kubenswrapper[26053]: I0318 09:06:14.917559 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" Mar 18 09:06:14.919475 master-0 kubenswrapper[26053]: I0318 09:06:14.919361 26053 status_manager.go:851] "Failed to get status for pod" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-7875f64c8-kmr8t\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.920807 master-0 kubenswrapper[26053]: I0318 09:06:14.920714 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.925678 master-0 kubenswrapper[26053]: I0318 09:06:14.925623 26053 status_manager.go:851] "Failed to get status for pod" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-7875f64c8-kmr8t\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:14.926600 master-0 kubenswrapper[26053]: I0318 09:06:14.926489 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:15.220378 master-0 kubenswrapper[26053]: E0318 09:06:15.220276 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 09:06:15.729991 master-0 kubenswrapper[26053]: I0318 09:06:15.729924 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:15.731898 master-0 kubenswrapper[26053]: I0318 09:06:15.731814 26053 status_manager.go:851] "Failed to get status for pod" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-7875f64c8-kmr8t\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:15.733026 master-0 kubenswrapper[26053]: I0318 09:06:15.732945 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:15.773053 master-0 kubenswrapper[26053]: I0318 09:06:15.772975 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:15.773053 master-0 kubenswrapper[26053]: I0318 09:06:15.773035 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:15.774105 master-0 kubenswrapper[26053]: E0318 09:06:15.773978 26053 
mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:15.774803 master-0 kubenswrapper[26053]: I0318 09:06:15.774755 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:15.798514 master-0 kubenswrapper[26053]: W0318 09:06:15.798439 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d5ce05b3d592e63f1f92202d52b9635.slice/crio-7615d19d60b996c1d40006087ef59f57760da8d229027c24587fdc33b26e8d75 WatchSource:0}: Error finding container 7615d19d60b996c1d40006087ef59f57760da8d229027c24587fdc33b26e8d75: Status 404 returned error can't find the container with id 7615d19d60b996c1d40006087ef59f57760da8d229027c24587fdc33b26e8d75 Mar 18 09:06:15.812624 master-0 kubenswrapper[26053]: I0318 09:06:15.812543 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body= Mar 18 09:06:15.812801 master-0 kubenswrapper[26053]: I0318 09:06:15.812616 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" Mar 18 09:06:15.933358 master-0 kubenswrapper[26053]: I0318 09:06:15.933291 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"7615d19d60b996c1d40006087ef59f57760da8d229027c24587fdc33b26e8d75"} Mar 18 09:06:16.022197 master-0 kubenswrapper[26053]: E0318 09:06:16.022139 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 09:06:16.942533 master-0 kubenswrapper[26053]: I0318 09:06:16.942473 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0" exitCode=0 Mar 18 09:06:16.943398 master-0 kubenswrapper[26053]: I0318 09:06:16.942546 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerDied","Data":"19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0"} Mar 18 09:06:16.943643 master-0 kubenswrapper[26053]: I0318 09:06:16.943104 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:16.943842 master-0 kubenswrapper[26053]: I0318 09:06:16.943813 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:16.944050 master-0 kubenswrapper[26053]: I0318 09:06:16.943970 26053 status_manager.go:851] "Failed to get status for pod" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" pod="openshift-monitoring/metrics-server-7875f64c8-kmr8t" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/metrics-server-7875f64c8-kmr8t\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 
09:06:16.945293 master-0 kubenswrapper[26053]: E0318 09:06:16.945224 26053 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:16.945434 master-0 kubenswrapper[26053]: I0318 09:06:16.945347 26053 status_manager.go:851] "Failed to get status for pod" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:06:17.957575 master-0 kubenswrapper[26053]: I0318 09:06:17.957495 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38"} Mar 18 09:06:17.958188 master-0 kubenswrapper[26053]: I0318 09:06:17.957660 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18"} Mar 18 09:06:17.958188 master-0 kubenswrapper[26053]: I0318 09:06:17.957689 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4"} Mar 18 09:06:18.965905 master-0 kubenswrapper[26053]: I0318 09:06:18.965852 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a"} Mar 18 09:06:18.965905 master-0 kubenswrapper[26053]: I0318 09:06:18.965897 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"7d5ce05b3d592e63f1f92202d52b9635","Type":"ContainerStarted","Data":"b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424"} Mar 18 09:06:18.966620 master-0 kubenswrapper[26053]: I0318 09:06:18.966053 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:18.966620 master-0 kubenswrapper[26053]: I0318 09:06:18.966175 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:18.966620 master-0 kubenswrapper[26053]: I0318 09:06:18.966207 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99" Mar 18 09:06:19.723418 master-0 kubenswrapper[26053]: I0318 09:06:19.723332 26053 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:06:19.723796 master-0 kubenswrapper[26053]: I0318 09:06:19.723442 26053 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:06:19.723796 master-0 kubenswrapper[26053]: 
I0318 09:06:19.723342 26053 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 18 09:06:19.723796 master-0 kubenswrapper[26053]: I0318 09:06:19.723666 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 18 09:06:19.973625 master-0 kubenswrapper[26053]: I0318 09:06:19.973483 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager/0.log" Mar 18 09:06:19.973625 master-0 kubenswrapper[26053]: I0318 09:06:19.973526 26053 generic.go:334] "Generic (PLEG): container finished" podID="2902db65fe16fd26bf5e57c38292ff3f" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc" exitCode=1 Mar 18 09:06:19.973625 master-0 kubenswrapper[26053]: I0318 09:06:19.973552 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerDied","Data":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"} Mar 18 09:06:19.974325 master-0 kubenswrapper[26053]: I0318 09:06:19.973950 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc" Mar 18 09:06:20.279356 master-0 kubenswrapper[26053]: I0318 09:06:20.279317 26053 patch_prober.go:28] interesting pod/console-7748c6b99d-fkjm5 container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:06:20.279674 master-0 kubenswrapper[26053]: I0318 09:06:20.279620 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:06:20.775737 master-0 kubenswrapper[26053]: I0318 09:06:20.775666 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:20.776037 master-0 kubenswrapper[26053]: I0318 09:06:20.775926 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:20.789675 master-0 kubenswrapper[26053]: I0318 09:06:20.789603 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:20.990351 master-0 kubenswrapper[26053]: I0318 09:06:20.990286 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager/0.log" Mar 18 09:06:20.991148 master-0 kubenswrapper[26053]: I0318 09:06:20.990382 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"2902db65fe16fd26bf5e57c38292ff3f","Type":"ContainerStarted","Data":"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"} Mar 18 09:06:24.033955 master-0 kubenswrapper[26053]: I0318 09:06:24.033894 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 
Mar 18 09:06:24.182960 master-0 kubenswrapper[26053]: I0318 09:06:24.182908 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:06:24.911834 master-0 kubenswrapper[26053]: I0318 09:06:24.911751 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-68c5849c7c-lqm2r" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console" containerID="cri-o://c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba" gracePeriod=15
Mar 18 09:06:25.025235 master-0 kubenswrapper[26053]: I0318 09:06:25.025127 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99"
Mar 18 09:06:25.025235 master-0 kubenswrapper[26053]: I0318 09:06:25.025184 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99"
Mar 18 09:06:25.031250 master-0 kubenswrapper[26053]: I0318 09:06:25.031167 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:06:25.033909 master-0 kubenswrapper[26053]: I0318 09:06:25.033375 26053 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4"
Mar 18 09:06:25.033909 master-0 kubenswrapper[26053]: I0318 09:06:25.033438 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:06:25.477260 master-0 kubenswrapper[26053]: I0318 09:06:25.477198 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68c5849c7c-lqm2r_32425206-41b7-427e-8773-f650801d9d76/console/0.log"
Mar 18 09:06:25.477496 master-0 kubenswrapper[26053]: I0318 09:06:25.477289 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:06:25.589753 master-0 kubenswrapper[26053]: I0318 09:06:25.589712 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.589986 master-0 kubenswrapper[26053]: I0318 09:06:25.589972 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.590107 master-0 kubenswrapper[26053]: I0318 09:06:25.590094 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.590248 master-0 kubenswrapper[26053]: I0318 09:06:25.590231 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.590355 master-0 kubenswrapper[26053]: I0318 09:06:25.590337 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.590501 master-0 kubenswrapper[26053]: I0318 09:06:25.590482 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2klpq\" (UniqueName: \"kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq\") pod \"32425206-41b7-427e-8773-f650801d9d76\" (UID: \"32425206-41b7-427e-8773-f650801d9d76\") "
Mar 18 09:06:25.590704 master-0 kubenswrapper[26053]: I0318 09:06:25.590358 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca" (OuterVolumeSpecName: "service-ca") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:25.590929 master-0 kubenswrapper[26053]: I0318 09:06:25.590865 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config" (OuterVolumeSpecName: "console-config") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:25.591134 master-0 kubenswrapper[26053]: I0318 09:06:25.591112 26053 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.591231 master-0 kubenswrapper[26053]: I0318 09:06:25.591216 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-console-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.591318 master-0 kubenswrapper[26053]: I0318 09:06:25.591176 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:25.594187 master-0 kubenswrapper[26053]: I0318 09:06:25.594135 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq" (OuterVolumeSpecName: "kube-api-access-2klpq") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "kube-api-access-2klpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:06:25.594401 master-0 kubenswrapper[26053]: I0318 09:06:25.594362 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:06:25.595073 master-0 kubenswrapper[26053]: I0318 09:06:25.595039 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "32425206-41b7-427e-8773-f650801d9d76" (UID: "32425206-41b7-427e-8773-f650801d9d76"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:06:25.693905 master-0 kubenswrapper[26053]: I0318 09:06:25.693792 26053 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.693905 master-0 kubenswrapper[26053]: I0318 09:06:25.693848 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32425206-41b7-427e-8773-f650801d9d76-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.693905 master-0 kubenswrapper[26053]: I0318 09:06:25.693868 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2klpq\" (UniqueName: \"kubernetes.io/projected/32425206-41b7-427e-8773-f650801d9d76-kube-api-access-2klpq\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.693905 master-0 kubenswrapper[26053]: I0318 09:06:25.693887 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32425206-41b7-427e-8773-f650801d9d76-oauth-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:25.813368 master-0 kubenswrapper[26053]: I0318 09:06:25.813272 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:06:25.813639 master-0 kubenswrapper[26053]: I0318 09:06:25.813380 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035258 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-68c5849c7c-lqm2r_32425206-41b7-427e-8773-f650801d9d76/console/0.log"
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035324 26053 generic.go:334] "Generic (PLEG): container finished" podID="32425206-41b7-427e-8773-f650801d9d76" containerID="c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba" exitCode=2
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035380 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68c5849c7c-lqm2r" event={"ID":"32425206-41b7-427e-8773-f650801d9d76","Type":"ContainerDied","Data":"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"}
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035418 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68c5849c7c-lqm2r"
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035452 26053 scope.go:117] "RemoveContainer" containerID="c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035439 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68c5849c7c-lqm2r" event={"ID":"32425206-41b7-427e-8773-f650801d9d76","Type":"ContainerDied","Data":"94154839b8010dc6af1ce4ababf21622beef8733429a4c5de63c874606a0f08f"}
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035647 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99"
Mar 18 09:06:26.035928 master-0 kubenswrapper[26053]: I0318 09:06:26.035664 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="fc00bfac-d1c1-44b0-885a-62904f888a99"
Mar 18 09:06:26.039276 master-0 kubenswrapper[26053]: I0318 09:06:26.039149 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="7d5ce05b3d592e63f1f92202d52b9635" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:06:26.059716 master-0 kubenswrapper[26053]: I0318 09:06:26.059654 26053 scope.go:117] "RemoveContainer" containerID="c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"
Mar 18 09:06:26.060470 master-0 kubenswrapper[26053]: E0318 09:06:26.060431 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba\": container with ID starting with c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba not found: ID does not exist" containerID="c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"
Mar 18 09:06:26.060541 master-0 kubenswrapper[26053]: I0318 09:06:26.060471 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba"} err="failed to get container status \"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba\": rpc error: code = NotFound desc = could not find container \"c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba\": container with ID starting with c1393693cfa1bf6a1832a3ae800f7a13ec17094ab451357052db3f864a6739ba not found: ID does not exist"
Mar 18 09:06:29.722504 master-0 kubenswrapper[26053]: I0318 09:06:29.722369 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:06:29.722504 master-0 kubenswrapper[26053]: I0318 09:06:29.722489 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:06:29.732157 master-0 kubenswrapper[26053]: I0318 09:06:29.732085 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:06:30.083300 master-0 kubenswrapper[26053]: I0318 09:06:30.083155 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:06:30.279385 master-0 kubenswrapper[26053]: I0318 09:06:30.279270 26053 patch_prober.go:28] interesting pod/console-7748c6b99d-fkjm5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:30.279719 master-0 kubenswrapper[26053]: I0318 09:06:30.279394 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:30.504128 master-0 kubenswrapper[26053]: I0318 09:06:30.504054 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 09:06:32.923837 master-0 kubenswrapper[26053]: I0318 09:06:32.923717 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 09:06:33.501373 master-0 kubenswrapper[26053]: I0318 09:06:33.501316 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 18 09:06:33.799682 master-0 kubenswrapper[26053]: I0318 09:06:33.799411 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 09:06:33.935269 master-0 kubenswrapper[26053]: I0318 09:06:33.935205 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 09:06:34.139498 master-0 kubenswrapper[26053]: I0318 09:06:34.139298 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 18 09:06:34.463774 master-0 kubenswrapper[26053]: I0318 09:06:34.463734 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-l7k6v"
Mar 18 09:06:34.538650 master-0 kubenswrapper[26053]: I0318 09:06:34.538538 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 09:06:35.329658 master-0 kubenswrapper[26053]: I0318 09:06:35.329594 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 18 09:06:35.351152 master-0 kubenswrapper[26053]: I0318 09:06:35.351071 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 18 09:06:35.376554 master-0 kubenswrapper[26053]: I0318 09:06:35.376425 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 18 09:06:35.812961 master-0 kubenswrapper[26053]: I0318 09:06:35.812867 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused" start-of-body=
Mar 18 09:06:35.813285 master-0 kubenswrapper[26053]: I0318 09:06:35.812958 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: connect: connection refused"
Mar 18 09:06:36.059488 master-0 kubenswrapper[26053]: I0318 09:06:36.059420 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 18 09:06:36.183724 master-0 kubenswrapper[26053]: I0318 09:06:36.183601 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 09:06:36.330815 master-0 kubenswrapper[26053]: I0318 09:06:36.330738 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 18 09:06:36.548004 master-0 kubenswrapper[26053]: I0318 09:06:36.547916 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 09:06:36.647750 master-0 kubenswrapper[26053]: I0318 09:06:36.647687 26053 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 18 09:06:36.931096 master-0 kubenswrapper[26053]: I0318 09:06:36.930897 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-kg24z"
Mar 18 09:06:36.947408 master-0 kubenswrapper[26053]: I0318 09:06:36.947315 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 18 09:06:36.992490 master-0 kubenswrapper[26053]: I0318 09:06:36.992421 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 09:06:37.186488 master-0 kubenswrapper[26053]: I0318 09:06:37.186315 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 18 09:06:37.193749 master-0 kubenswrapper[26053]: I0318 09:06:37.193682 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 18 09:06:37.281328 master-0 kubenswrapper[26053]: I0318 09:06:37.281236 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 18 09:06:37.360469 master-0 kubenswrapper[26053]: I0318 09:06:37.360390 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 09:06:37.427261 master-0 kubenswrapper[26053]: I0318 09:06:37.427189 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 18 09:06:37.680077 master-0 kubenswrapper[26053]: I0318 09:06:37.680004 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:06:38.020397 master-0 kubenswrapper[26053]: I0318 09:06:38.020346 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 18 09:06:38.038828 master-0 kubenswrapper[26053]: I0318 09:06:38.038774 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 09:06:38.098306 master-0 kubenswrapper[26053]: I0318 09:06:38.098243 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 09:06:38.111437 master-0 kubenswrapper[26053]: I0318 09:06:38.111396 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mbtdj"
Mar 18 09:06:38.334708 master-0 kubenswrapper[26053]: I0318 09:06:38.334476 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 09:06:38.402025 master-0 kubenswrapper[26053]: I0318 09:06:38.401946 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 18 09:06:38.503747 master-0 kubenswrapper[26053]: I0318 09:06:38.503671 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 18 09:06:38.588213 master-0 kubenswrapper[26053]: I0318 09:06:38.588095 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 09:06:38.600836 master-0 kubenswrapper[26053]: I0318 09:06:38.600779 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 18 09:06:38.745746 master-0 kubenswrapper[26053]: I0318 09:06:38.745696 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 09:06:38.763511 master-0 kubenswrapper[26053]: I0318 09:06:38.763437 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 09:06:38.773312 master-0 kubenswrapper[26053]: I0318 09:06:38.773191 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 09:06:38.915823 master-0 kubenswrapper[26053]: I0318 09:06:38.915680 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 09:06:38.936780 master-0 kubenswrapper[26053]: I0318 09:06:38.936712 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 09:06:39.174249 master-0 kubenswrapper[26053]: I0318 09:06:39.173880 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 18 09:06:39.285942 master-0 kubenswrapper[26053]: I0318 09:06:39.285893 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 18 09:06:39.482027 master-0 kubenswrapper[26053]: I0318 09:06:39.481940 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-fhncm"
Mar 18 09:06:39.524871 master-0 kubenswrapper[26053]: I0318 09:06:39.524794 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 09:06:39.581346 master-0 kubenswrapper[26053]: I0318 09:06:39.581282 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 09:06:39.605815 master-0 kubenswrapper[26053]: I0318 09:06:39.605735 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 09:06:39.881200 master-0 kubenswrapper[26053]: I0318 09:06:39.881082 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 18 09:06:39.972071 master-0 kubenswrapper[26053]: I0318 09:06:39.972033 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:06:40.033174 master-0 kubenswrapper[26053]: I0318 09:06:40.033105 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 18 09:06:40.093259 master-0 kubenswrapper[26053]: I0318 09:06:40.093204 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 09:06:40.122807 master-0 kubenswrapper[26053]: I0318 09:06:40.122755 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 09:06:40.128652 master-0 kubenswrapper[26053]: I0318 09:06:40.128619 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 18 09:06:40.135176 master-0 kubenswrapper[26053]: I0318 09:06:40.135069 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 09:06:40.268231 master-0 kubenswrapper[26053]: I0318 09:06:40.268170 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-pfhv7"
Mar 18 09:06:40.274714 master-0 kubenswrapper[26053]: I0318 09:06:40.274680 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 09:06:40.279588 master-0 kubenswrapper[26053]: I0318 09:06:40.279521 26053 patch_prober.go:28] interesting pod/console-7748c6b99d-fkjm5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 18 09:06:40.279687 master-0 kubenswrapper[26053]: I0318 09:06:40.279598 26053 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 18 09:06:40.307600 master-0 kubenswrapper[26053]: I0318 09:06:40.307528 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 09:06:40.307847 master-0 kubenswrapper[26053]: I0318 09:06:40.307667 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lgw5q"
Mar 18 09:06:40.332541 master-0 kubenswrapper[26053]: I0318 09:06:40.332484 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-mbgdl"
Mar 18 09:06:40.378055 master-0 kubenswrapper[26053]: I0318 09:06:40.377982 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:06:40.401432 master-0 kubenswrapper[26053]: I0318 09:06:40.401320 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 09:06:40.441896 master-0 kubenswrapper[26053]: I0318 09:06:40.441845 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 18 09:06:40.459856 master-0 kubenswrapper[26053]: I0318 09:06:40.459799 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 09:06:40.710025 master-0 kubenswrapper[26053]: I0318 09:06:40.709973 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6"
Mar 18 09:06:40.715249 master-0 kubenswrapper[26053]: I0318 09:06:40.715197 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 18 09:06:40.723643 master-0 kubenswrapper[26053]: I0318 09:06:40.723607 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 18 09:06:40.737933 master-0 kubenswrapper[26053]: I0318 09:06:40.737814 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:06:40.799407 master-0 kubenswrapper[26053]: I0318 09:06:40.799340 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 18 09:06:40.827329 master-0 kubenswrapper[26053]: I0318 09:06:40.825956 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 09:06:40.827832 master-0 kubenswrapper[26053]: I0318 09:06:40.827682 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 09:06:40.856164 master-0 kubenswrapper[26053]: I0318 09:06:40.856115 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 18 09:06:40.921362 master-0 kubenswrapper[26053]: I0318 09:06:40.921309 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pj2bk"
Mar 18 09:06:40.941358 master-0 kubenswrapper[26053]: I0318 09:06:40.941295 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 18 09:06:40.987975 master-0 kubenswrapper[26053]: I0318 09:06:40.987817 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 09:06:40.996211 master-0 kubenswrapper[26053]: I0318 09:06:40.996078 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 18 09:06:41.024319 master-0 kubenswrapper[26053]: I0318 09:06:41.024253 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 09:06:41.248292 master-0 kubenswrapper[26053]: I0318 09:06:41.248175 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 18 09:06:41.335222 master-0 kubenswrapper[26053]: I0318 09:06:41.335183 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 09:06:41.410486 master-0 kubenswrapper[26053]: I0318 09:06:41.410422 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 18 09:06:41.451289 master-0 kubenswrapper[26053]: I0318 09:06:41.451214 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 18 09:06:41.477440 master-0 kubenswrapper[26053]: I0318 09:06:41.477376 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 09:06:41.543481 master-0 kubenswrapper[26053]: I0318 09:06:41.543347 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 09:06:41.585504 master-0 kubenswrapper[26053]: I0318 09:06:41.585443 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 09:06:41.689201 master-0 kubenswrapper[26053]: I0318 09:06:41.689136 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 09:06:41.713306 master-0 kubenswrapper[26053]: I0318 09:06:41.713244 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 18 09:06:41.744620 master-0 kubenswrapper[26053]: I0318 09:06:41.739919 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:06:41.783971 master-0 kubenswrapper[26053]: I0318 09:06:41.783901 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 09:06:41.803770 master-0 kubenswrapper[26053]: I0318 09:06:41.803661 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 18 09:06:41.811838 master-0 kubenswrapper[26053]: I0318 09:06:41.811785 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 09:06:41.840918 master-0 kubenswrapper[26053]: I0318 09:06:41.840847 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 09:06:41.865627 master-0 kubenswrapper[26053]: I0318 09:06:41.865593 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 18 09:06:41.945975 master-0 kubenswrapper[26053]: I0318 09:06:41.945912 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-29bbg"
Mar 18 09:06:41.950489 master-0 kubenswrapper[26053]: I0318 09:06:41.950404 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 18 09:06:41.962831 master-0 kubenswrapper[26053]: I0318 09:06:41.962769 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 09:06:41.969671 master-0 kubenswrapper[26053]: I0318 09:06:41.969546 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 18 09:06:41.989673 master-0 kubenswrapper[26053]: I0318 09:06:41.989560 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:06:42.024088 master-0 kubenswrapper[26053]: I0318 09:06:42.023985 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 09:06:42.035402 master-0 kubenswrapper[26053]: I0318 09:06:42.035309 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 09:06:42.066765 master-0 kubenswrapper[26053]: I0318 09:06:42.066632 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 09:06:42.121133 master-0 kubenswrapper[26053]: I0318 09:06:42.121062 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 18 09:06:42.184248 master-0 kubenswrapper[26053]: I0318 09:06:42.184186 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 18 09:06:42.251817 master-0 kubenswrapper[26053]: I0318 09:06:42.251762 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 09:06:42.281807 master-0 kubenswrapper[26053]: I0318 09:06:42.281753 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-658wv"
Mar 18 09:06:42.497341 master-0 kubenswrapper[26053]: I0318 09:06:42.497288 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 09:06:42.548834 master-0 kubenswrapper[26053]: I0318 09:06:42.548758 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 18 09:06:42.600795 master-0 kubenswrapper[26053]: I0318 09:06:42.600701 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 09:06:42.634943 master-0 kubenswrapper[26053]: I0318 09:06:42.634865 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 18 09:06:42.669909 master-0 kubenswrapper[26053]: I0318 09:06:42.669833 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:06:42.694279 master-0 kubenswrapper[26053]: I0318 09:06:42.694171 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4lqimvakop077"
Mar 18 09:06:42.722266 master-0 kubenswrapper[26053]: I0318 09:06:42.722171 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m2754"
Mar 18 09:06:42.723097 master-0 kubenswrapper[26053]: I0318 09:06:42.722282 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 09:06:42.741435 master-0 kubenswrapper[26053]: I0318 09:06:42.741348 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 09:06:42.845696 master-0 kubenswrapper[26053]: I0318 09:06:42.845537 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 18 09:06:42.870454 master-0 kubenswrapper[26053]: I0318 09:06:42.870368 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 18 09:06:42.878806 master-0 kubenswrapper[26053]: I0318 09:06:42.878743 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 09:06:42.916235 master-0 kubenswrapper[26053]: I0318 09:06:42.916155 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 09:06:42.955528 master-0 kubenswrapper[26053]: I0318 09:06:42.955465 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 09:06:42.970260 master-0 kubenswrapper[26053]: I0318 09:06:42.970185 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 09:06:43.002988 master-0 kubenswrapper[26053]: I0318 09:06:43.002928 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 18 09:06:43.041954 master-0 kubenswrapper[26053]: I0318 09:06:43.041900 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 18 09:06:43.123759 master-0 kubenswrapper[26053]: I0318 09:06:43.123603 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 09:06:43.165754 master-0 kubenswrapper[26053]: I0318 09:06:43.165700 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 18 09:06:43.241140 master-0 kubenswrapper[26053]: I0318 09:06:43.241068 26053 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:06:43.278629 master-0 kubenswrapper[26053]: I0318 09:06:43.278083 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 09:06:43.289656 master-0 kubenswrapper[26053]: I0318 09:06:43.289586 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 18 09:06:43.313643 master-0 kubenswrapper[26053]: I0318 09:06:43.313521 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:06:43.410534 master-0 kubenswrapper[26053]: I0318 09:06:43.410418 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:06:43.420527 master-0 kubenswrapper[26053]: I0318 09:06:43.420482 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nvh22" Mar 18 09:06:43.426886 master-0 kubenswrapper[26053]: I0318 09:06:43.426853 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 18 09:06:43.505116 master-0 kubenswrapper[26053]: I0318 09:06:43.505050 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 18 09:06:43.528172 master-0 kubenswrapper[26053]: I0318 09:06:43.528109 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:06:43.638594 master-0 kubenswrapper[26053]: I0318 09:06:43.638539 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:06:43.659160 master-0 kubenswrapper[26053]: I0318 09:06:43.659125 26053 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 09:06:43.668419 master-0 kubenswrapper[26053]: I0318 09:06:43.668317 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xhpr4" Mar 18 09:06:43.700329 master-0 kubenswrapper[26053]: I0318 09:06:43.700271 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 18 09:06:43.738038 master-0 kubenswrapper[26053]: I0318 09:06:43.738003 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 18 09:06:43.791290 master-0 kubenswrapper[26053]: I0318 09:06:43.791246 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:06:43.879908 master-0 kubenswrapper[26053]: I0318 09:06:43.879858 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 09:06:43.917363 master-0 kubenswrapper[26053]: I0318 09:06:43.917294 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 09:06:43.947533 master-0 kubenswrapper[26053]: I0318 09:06:43.947429 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tnmb8" Mar 18 09:06:43.997418 master-0 kubenswrapper[26053]: I0318 09:06:43.997374 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-pws99" Mar 18 09:06:44.070683 master-0 kubenswrapper[26053]: I0318 09:06:44.070642 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 18 09:06:44.108358 master-0 kubenswrapper[26053]: I0318 09:06:44.108278 26053 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-khzbd" Mar 18 09:06:44.130017 master-0 kubenswrapper[26053]: I0318 09:06:44.129930 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 18 09:06:44.141874 master-0 kubenswrapper[26053]: I0318 09:06:44.141824 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 09:06:44.144272 master-0 kubenswrapper[26053]: I0318 09:06:44.144227 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 18 09:06:44.147229 master-0 kubenswrapper[26053]: I0318 09:06:44.147173 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:06:44.150840 master-0 kubenswrapper[26053]: I0318 09:06:44.150804 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 18 09:06:44.191787 master-0 kubenswrapper[26053]: I0318 09:06:44.191727 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 18 09:06:44.340538 master-0 kubenswrapper[26053]: I0318 09:06:44.340458 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:06:44.492548 master-0 kubenswrapper[26053]: I0318 09:06:44.492474 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 18 09:06:44.522242 master-0 kubenswrapper[26053]: I0318 09:06:44.522173 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:06:44.582877 master-0 kubenswrapper[26053]: I0318 
09:06:44.582795 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:06:44.589923 master-0 kubenswrapper[26053]: I0318 09:06:44.589883 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 09:06:44.610344 master-0 kubenswrapper[26053]: I0318 09:06:44.610193 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:06:44.658329 master-0 kubenswrapper[26053]: I0318 09:06:44.658250 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kldf7" Mar 18 09:06:44.725023 master-0 kubenswrapper[26053]: I0318 09:06:44.724955 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 18 09:06:44.731482 master-0 kubenswrapper[26053]: I0318 09:06:44.731419 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 18 09:06:44.771602 master-0 kubenswrapper[26053]: I0318 09:06:44.771516 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 18 09:06:44.853082 master-0 kubenswrapper[26053]: I0318 09:06:44.853005 26053 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 09:06:45.039845 master-0 kubenswrapper[26053]: I0318 09:06:45.039785 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 18 09:06:45.040136 master-0 kubenswrapper[26053]: I0318 09:06:45.039882 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 
18 09:06:45.065114 master-0 kubenswrapper[26053]: I0318 09:06:45.065042 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 18 09:06:45.091019 master-0 kubenswrapper[26053]: I0318 09:06:45.090958 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 09:06:45.124724 master-0 kubenswrapper[26053]: I0318 09:06:45.124666 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 18 09:06:45.174824 master-0 kubenswrapper[26053]: I0318 09:06:45.174756 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-gx9ws" Mar 18 09:06:45.313129 master-0 kubenswrapper[26053]: I0318 09:06:45.312317 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:06:45.363913 master-0 kubenswrapper[26053]: I0318 09:06:45.363845 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 18 09:06:45.393734 master-0 kubenswrapper[26053]: I0318 09:06:45.393679 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-s4fhp" Mar 18 09:06:45.394282 master-0 kubenswrapper[26053]: I0318 09:06:45.394235 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 18 09:06:45.553057 master-0 kubenswrapper[26053]: I0318 09:06:45.553020 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 18 09:06:45.592387 master-0 kubenswrapper[26053]: I0318 09:06:45.592315 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 
09:06:45.610180 master-0 kubenswrapper[26053]: I0318 09:06:45.609550 26053 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 18 09:06:45.616288 master-0 kubenswrapper[26053]: I0318 09:06:45.616173 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 18 09:06:45.620727 master-0 kubenswrapper[26053]: I0318 09:06:45.620672 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-7875f64c8-kmr8t","openshift-console/console-68c5849c7c-lqm2r","openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:06:45.620835 master-0 kubenswrapper[26053]: I0318 09:06:45.620782 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 18 09:06:45.628249 master-0 kubenswrapper[26053]: I0318 09:06:45.628211 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:06:45.653100 master-0 kubenswrapper[26053]: I0318 09:06:45.653002 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=21.652972083 podStartE2EDuration="21.652972083s" podCreationTimestamp="2026-03-18 09:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:06:45.647943685 +0000 UTC m=+193.141295086" watchObservedRunningTime="2026-03-18 09:06:45.652972083 +0000 UTC m=+193.146323504" Mar 18 09:06:45.663893 master-0 kubenswrapper[26053]: I0318 09:06:45.663813 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:06:45.803457 master-0 kubenswrapper[26053]: I0318 09:06:45.803368 26053 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:06:45.818351 master-0 kubenswrapper[26053]: I0318 09:06:45.818244 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:06:45.825961 master-0 kubenswrapper[26053]: I0318 09:06:45.825923 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:06:45.836399 master-0 kubenswrapper[26053]: I0318 09:06:45.836352 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 18 09:06:45.838516 master-0 kubenswrapper[26053]: I0318 09:06:45.838480 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:06:45.870616 master-0 kubenswrapper[26053]: I0318 09:06:45.870576 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-mn6mb" Mar 18 09:06:45.931011 master-0 kubenswrapper[26053]: I0318 09:06:45.930965 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 18 09:06:45.961588 master-0 kubenswrapper[26053]: I0318 09:06:45.961516 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 18 09:06:45.980592 master-0 kubenswrapper[26053]: I0318 09:06:45.977985 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 18 09:06:46.040020 master-0 kubenswrapper[26053]: I0318 09:06:46.039956 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hbb9q" Mar 18 09:06:46.083206 master-0 kubenswrapper[26053]: I0318 09:06:46.083102 26053 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 09:06:46.091765 master-0 kubenswrapper[26053]: I0318 09:06:46.091723 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 09:06:46.098973 master-0 kubenswrapper[26053]: I0318 09:06:46.098922 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 09:06:46.108935 master-0 kubenswrapper[26053]: I0318 09:06:46.108890 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:06:46.118451 master-0 kubenswrapper[26053]: I0318 09:06:46.118403 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 18 09:06:46.127607 master-0 kubenswrapper[26053]: I0318 09:06:46.127555 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:06:46.150687 master-0 kubenswrapper[26053]: I0318 09:06:46.150649 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:06:46.274433 master-0 kubenswrapper[26053]: I0318 09:06:46.274370 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:06:46.283897 master-0 kubenswrapper[26053]: I0318 09:06:46.283827 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:06:46.302663 master-0 kubenswrapper[26053]: I0318 09:06:46.302586 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 18 09:06:46.326493 master-0 kubenswrapper[26053]: I0318 09:06:46.326421 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:06:46.329930 master-0 kubenswrapper[26053]: I0318 09:06:46.329893 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 18 09:06:46.337532 master-0 kubenswrapper[26053]: I0318 09:06:46.337419 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 18 09:06:46.356736 master-0 kubenswrapper[26053]: I0318 09:06:46.356678 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 18 09:06:46.422965 master-0 kubenswrapper[26053]: I0318 09:06:46.422900 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 18 09:06:46.468369 master-0 kubenswrapper[26053]: I0318 09:06:46.468295 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-l4xp6" Mar 18 09:06:46.470026 master-0 kubenswrapper[26053]: I0318 09:06:46.469993 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 09:06:46.508722 master-0 kubenswrapper[26053]: I0318 09:06:46.508652 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:06:46.511944 master-0 kubenswrapper[26053]: I0318 09:06:46.511884 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 09:06:46.554462 master-0 kubenswrapper[26053]: I0318 09:06:46.554379 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 09:06:46.595844 master-0 
kubenswrapper[26053]: I0318 09:06:46.595672 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 09:06:46.614988 master-0 kubenswrapper[26053]: I0318 09:06:46.614905 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 18 09:06:46.636389 master-0 kubenswrapper[26053]: I0318 09:06:46.636328 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 18 09:06:46.641155 master-0 kubenswrapper[26053]: I0318 09:06:46.641053 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 18 09:06:46.659243 master-0 kubenswrapper[26053]: I0318 09:06:46.659151 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 09:06:46.664986 master-0 kubenswrapper[26053]: I0318 09:06:46.664929 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 09:06:46.711355 master-0 kubenswrapper[26053]: I0318 09:06:46.711266 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 18 09:06:46.727361 master-0 kubenswrapper[26053]: I0318 09:06:46.727279 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 18 09:06:46.740149 master-0 kubenswrapper[26053]: I0318 09:06:46.740050 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32425206-41b7-427e-8773-f650801d9d76" path="/var/lib/kubelet/pods/32425206-41b7-427e-8773-f650801d9d76/volumes" Mar 18 09:06:46.741190 master-0 kubenswrapper[26053]: I0318 09:06:46.741152 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" path="/var/lib/kubelet/pods/87381a51-96e6-4e86-bdae-c8ac3fc7a039/volumes" Mar 18 09:06:46.814451 master-0 kubenswrapper[26053]: I0318 09:06:46.814368 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kvnts" Mar 18 09:06:46.905174 master-0 kubenswrapper[26053]: I0318 09:06:46.904968 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 18 09:06:47.007809 master-0 kubenswrapper[26053]: I0318 09:06:47.007738 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 18 09:06:47.031223 master-0 kubenswrapper[26053]: I0318 09:06:47.031147 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 18 09:06:47.067206 master-0 kubenswrapper[26053]: I0318 09:06:47.067157 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:06:47.147949 master-0 kubenswrapper[26053]: I0318 09:06:47.147890 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:06:47.169867 master-0 kubenswrapper[26053]: I0318 09:06:47.169738 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:06:47.292396 master-0 kubenswrapper[26053]: I0318 09:06:47.292197 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qwxs4" Mar 18 09:06:47.449968 master-0 kubenswrapper[26053]: I0318 09:06:47.449813 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-gfnn4" Mar 18 09:06:47.475258 master-0 
kubenswrapper[26053]: I0318 09:06:47.475189 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:06:47.489384 master-0 kubenswrapper[26053]: I0318 09:06:47.489281 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s9qtf" Mar 18 09:06:47.515238 master-0 kubenswrapper[26053]: I0318 09:06:47.515167 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6mthr" Mar 18 09:06:47.677435 master-0 kubenswrapper[26053]: I0318 09:06:47.677360 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 09:06:47.738639 master-0 kubenswrapper[26053]: I0318 09:06:47.738583 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 18 09:06:47.783218 master-0 kubenswrapper[26053]: I0318 09:06:47.783160 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 09:06:47.885061 master-0 kubenswrapper[26053]: I0318 09:06:47.884988 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 18 09:06:47.952695 master-0 kubenswrapper[26053]: I0318 09:06:47.952623 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 18 09:06:47.972407 master-0 kubenswrapper[26053]: I0318 09:06:47.972312 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 18 09:06:48.082422 master-0 kubenswrapper[26053]: I0318 09:06:48.082251 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:06:48.162149 
master-0 kubenswrapper[26053]: I0318 09:06:48.162070 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:06:48.181219 master-0 kubenswrapper[26053]: I0318 09:06:48.181146 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 09:06:48.197514 master-0 kubenswrapper[26053]: I0318 09:06:48.197439 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9xv2f" Mar 18 09:06:48.317757 master-0 kubenswrapper[26053]: I0318 09:06:48.317673 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-d6jf5" Mar 18 09:06:48.416537 master-0 kubenswrapper[26053]: I0318 09:06:48.416378 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 18 09:06:48.422944 master-0 kubenswrapper[26053]: I0318 09:06:48.422853 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 18 09:06:48.449882 master-0 kubenswrapper[26053]: I0318 09:06:48.449789 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 18 09:06:48.547209 master-0 kubenswrapper[26053]: I0318 09:06:48.547094 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zlc9x" Mar 18 09:06:48.555131 master-0 kubenswrapper[26053]: I0318 09:06:48.555081 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-jqmlx" Mar 18 09:06:48.571082 master-0 kubenswrapper[26053]: I0318 09:06:48.570984 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" 
Mar 18 09:06:48.608363 master-0 kubenswrapper[26053]: I0318 09:06:48.608309 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 09:06:48.670261 master-0 kubenswrapper[26053]: I0318 09:06:48.670122 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 18 09:06:48.686880 master-0 kubenswrapper[26053]: I0318 09:06:48.686830 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vvwvf"
Mar 18 09:06:48.704872 master-0 kubenswrapper[26053]: I0318 09:06:48.704833 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 09:06:48.775756 master-0 kubenswrapper[26053]: I0318 09:06:48.775689 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 18 09:06:48.862025 master-0 kubenswrapper[26053]: I0318 09:06:48.861928 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 09:06:49.028112 master-0 kubenswrapper[26053]: I0318 09:06:49.026864 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-nxx2s"
Mar 18 09:06:49.043002 master-0 kubenswrapper[26053]: I0318 09:06:49.042954 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:06:49.130764 master-0 kubenswrapper[26053]: I0318 09:06:49.130681 26053 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 09:06:49.163892 master-0 kubenswrapper[26053]: I0318 09:06:49.163814 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 09:06:49.239347 master-0 kubenswrapper[26053]: I0318 09:06:49.239242 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"]
Mar 18 09:06:49.239697 master-0 kubenswrapper[26053]: I0318 09:06:49.239624 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" podUID="94b229a5-7840-46fe-a221-85093a4f4a72" containerName="controller-manager" containerID="cri-o://581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb" gracePeriod=30
Mar 18 09:06:49.275181 master-0 kubenswrapper[26053]: I0318 09:06:49.271173 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"]
Mar 18 09:06:49.275181 master-0 kubenswrapper[26053]: I0318 09:06:49.271626 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" podUID="7d382cea-1da2-48b9-b151-36438d83ee30" containerName="route-controller-manager" containerID="cri-o://4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c" gracePeriod=30
Mar 18 09:06:49.341662 master-0 kubenswrapper[26053]: I0318 09:06:49.341504 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 18 09:06:49.367057 master-0 kubenswrapper[26053]: I0318 09:06:49.366980 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 09:06:49.396403 master-0 kubenswrapper[26053]: I0318 09:06:49.396058 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 18 09:06:49.420934 master-0 kubenswrapper[26053]: I0318 09:06:49.420878 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 09:06:49.454671 master-0 kubenswrapper[26053]: I0318 09:06:49.454618 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 18 09:06:49.456354 master-0 kubenswrapper[26053]: I0318 09:06:49.456316 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 18 09:06:49.508155 master-0 kubenswrapper[26053]: I0318 09:06:49.508102 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m9g5m"
Mar 18 09:06:49.509602 master-0 kubenswrapper[26053]: I0318 09:06:49.509559 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 09:06:49.582718 master-0 kubenswrapper[26053]: I0318 09:06:49.580546 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 18 09:06:49.585000 master-0 kubenswrapper[26053]: I0318 09:06:49.584478 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 09:06:49.634948 master-0 kubenswrapper[26053]: I0318 09:06:49.634876 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 09:06:49.655263 master-0 kubenswrapper[26053]: I0318 09:06:49.655185 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 09:06:49.668605 master-0 kubenswrapper[26053]: I0318 09:06:49.667857 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 18 09:06:49.763371 master-0 kubenswrapper[26053]: I0318 09:06:49.763254 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 09:06:49.774006 master-0 kubenswrapper[26053]: I0318 09:06:49.773906 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"]
Mar 18 09:06:49.774239 master-0 kubenswrapper[26053]: E0318 09:06:49.774211 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" containerName="installer"
Mar 18 09:06:49.774239 master-0 kubenswrapper[26053]: I0318 09:06:49.774225 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" containerName="installer"
Mar 18 09:06:49.774337 master-0 kubenswrapper[26053]: E0318 09:06:49.774243 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerName="metrics-server"
Mar 18 09:06:49.774337 master-0 kubenswrapper[26053]: I0318 09:06:49.774250 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerName="metrics-server"
Mar 18 09:06:49.774337 master-0 kubenswrapper[26053]: E0318 09:06:49.774261 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console"
Mar 18 09:06:49.774337 master-0 kubenswrapper[26053]: I0318 09:06:49.774267 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console"
Mar 18 09:06:49.774496 master-0 kubenswrapper[26053]: I0318 09:06:49.774371 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="87381a51-96e6-4e86-bdae-c8ac3fc7a039" containerName="metrics-server"
Mar 18 09:06:49.774496 master-0 kubenswrapper[26053]: I0318 09:06:49.774389 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="32425206-41b7-427e-8773-f650801d9d76" containerName="console"
Mar 18 09:06:49.774496 master-0 kubenswrapper[26053]: I0318 09:06:49.774432 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="6030c175-df60-4af1-85b9-78a2cdc9f320" containerName="installer"
Mar 18 09:06:49.774980 master-0 kubenswrapper[26053]: I0318 09:06:49.774932 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.778852 master-0 kubenswrapper[26053]: I0318 09:06:49.778720 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 18 09:06:49.779954 master-0 kubenswrapper[26053]: I0318 09:06:49.778976 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 18 09:06:49.788013 master-0 kubenswrapper[26053]: I0318 09:06:49.787162 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"]
Mar 18 09:06:49.797345 master-0 kubenswrapper[26053]: I0318 09:06:49.797289 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 18 09:06:49.840341 master-0 kubenswrapper[26053]: I0318 09:06:49.840182 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4"
Mar 18 09:06:49.880426 master-0 kubenswrapper[26053]: I0318 09:06:49.880300 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.880624 master-0 kubenswrapper[26053]: I0318 09:06:49.880561 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9661c-0359-460f-a97d-a06f2b572d23-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.882667 master-0 kubenswrapper[26053]: I0318 09:06:49.882628 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 18 09:06:49.896405 master-0 kubenswrapper[26053]: I0318 09:06:49.896358 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"
Mar 18 09:06:49.904060 master-0 kubenswrapper[26053]: I0318 09:06:49.904013 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"
Mar 18 09:06:49.909966 master-0 kubenswrapper[26053]: I0318 09:06:49.909929 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 09:06:49.982147 master-0 kubenswrapper[26053]: I0318 09:06:49.982086 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config\") pod \"94b229a5-7840-46fe-a221-85093a4f4a72\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982163 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca\") pod \"7d382cea-1da2-48b9-b151-36438d83ee30\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982190 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca\") pod \"94b229a5-7840-46fe-a221-85093a4f4a72\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982222 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles\") pod \"94b229a5-7840-46fe-a221-85093a4f4a72\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982243 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv\") pod \"7d382cea-1da2-48b9-b151-36438d83ee30\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982266 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh5gx\" (UniqueName: \"kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx\") pod \"94b229a5-7840-46fe-a221-85093a4f4a72\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") "
Mar 18 09:06:49.982374 master-0 kubenswrapper[26053]: I0318 09:06:49.982364 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert\") pod \"94b229a5-7840-46fe-a221-85093a4f4a72\" (UID: \"94b229a5-7840-46fe-a221-85093a4f4a72\") "
Mar 18 09:06:49.982679 master-0 kubenswrapper[26053]: I0318 09:06:49.982402 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config\") pod \"7d382cea-1da2-48b9-b151-36438d83ee30\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") "
Mar 18 09:06:49.982679 master-0 kubenswrapper[26053]: I0318 09:06:49.982446 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert\") pod \"7d382cea-1da2-48b9-b151-36438d83ee30\" (UID: \"7d382cea-1da2-48b9-b151-36438d83ee30\") "
Mar 18 09:06:49.982763 master-0 kubenswrapper[26053]: I0318 09:06:49.982683 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.982763 master-0 kubenswrapper[26053]: I0318 09:06:49.982746 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9661c-0359-460f-a97d-a06f2b572d23-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.983704 master-0 kubenswrapper[26053]: I0318 09:06:49.983672 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/c2e9661c-0359-460f-a97d-a06f2b572d23-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:49.984380 master-0 kubenswrapper[26053]: I0318 09:06:49.984326 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config" (OuterVolumeSpecName: "config") pod "94b229a5-7840-46fe-a221-85093a4f4a72" (UID: "94b229a5-7840-46fe-a221-85093a4f4a72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:49.985858 master-0 kubenswrapper[26053]: E0318 09:06:49.985358 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:06:49.985858 master-0 kubenswrapper[26053]: E0318 09:06:49.985450 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:06:50.485422584 +0000 UTC m=+197.978773985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found
Mar 18 09:06:49.985858 master-0 kubenswrapper[26053]: I0318 09:06:49.985467 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca" (OuterVolumeSpecName: "client-ca") pod "94b229a5-7840-46fe-a221-85093a4f4a72" (UID: "94b229a5-7840-46fe-a221-85093a4f4a72"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:49.985858 master-0 kubenswrapper[26053]: I0318 09:06:49.985527 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config" (OuterVolumeSpecName: "config") pod "7d382cea-1da2-48b9-b151-36438d83ee30" (UID: "7d382cea-1da2-48b9-b151-36438d83ee30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:49.985858 master-0 kubenswrapper[26053]: I0318 09:06:49.985721 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca" (OuterVolumeSpecName: "client-ca") pod "7d382cea-1da2-48b9-b151-36438d83ee30" (UID: "7d382cea-1da2-48b9-b151-36438d83ee30"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:49.986423 master-0 kubenswrapper[26053]: I0318 09:06:49.986398 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "94b229a5-7840-46fe-a221-85093a4f4a72" (UID: "94b229a5-7840-46fe-a221-85093a4f4a72"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:06:49.987967 master-0 kubenswrapper[26053]: I0318 09:06:49.987554 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 18 09:06:49.987967 master-0 kubenswrapper[26053]: I0318 09:06:49.987872 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7d382cea-1da2-48b9-b151-36438d83ee30" (UID: "7d382cea-1da2-48b9-b151-36438d83ee30"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:06:49.987967 master-0 kubenswrapper[26053]: I0318 09:06:49.987917 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx" (OuterVolumeSpecName: "kube-api-access-zh5gx") pod "94b229a5-7840-46fe-a221-85093a4f4a72" (UID: "94b229a5-7840-46fe-a221-85093a4f4a72"). InnerVolumeSpecName "kube-api-access-zh5gx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:06:49.990726 master-0 kubenswrapper[26053]: I0318 09:06:49.990672 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "94b229a5-7840-46fe-a221-85093a4f4a72" (UID: "94b229a5-7840-46fe-a221-85093a4f4a72"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:06:49.990851 master-0 kubenswrapper[26053]: I0318 09:06:49.990793 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv" (OuterVolumeSpecName: "kube-api-access-vtdxv") pod "7d382cea-1da2-48b9-b151-36438d83ee30" (UID: "7d382cea-1da2-48b9-b151-36438d83ee30"). InnerVolumeSpecName "kube-api-access-vtdxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:06:50.013490 master-0 kubenswrapper[26053]: I0318 09:06:50.013320 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 18 09:06:50.055231 master-0 kubenswrapper[26053]: I0318 09:06:50.054768 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085103 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085147 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085158 26053 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085171 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtdxv\" (UniqueName: \"kubernetes.io/projected/7d382cea-1da2-48b9-b151-36438d83ee30-kube-api-access-vtdxv\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085183 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh5gx\" (UniqueName: \"kubernetes.io/projected/94b229a5-7840-46fe-a221-85093a4f4a72-kube-api-access-zh5gx\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085194 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94b229a5-7840-46fe-a221-85093a4f4a72-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085206 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d382cea-1da2-48b9-b151-36438d83ee30-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085217 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d382cea-1da2-48b9-b151-36438d83ee30-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.085253 master-0 kubenswrapper[26053]: I0318 09:06:50.085228 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94b229a5-7840-46fe-a221-85093a4f4a72-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:06:50.110456 master-0 kubenswrapper[26053]: I0318 09:06:50.105040 26053 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 18 09:06:50.240590 master-0 kubenswrapper[26053]: I0318 09:06:50.240493 26053 generic.go:334] "Generic (PLEG): container finished" podID="94b229a5-7840-46fe-a221-85093a4f4a72" containerID="581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb" exitCode=0
Mar 18 09:06:50.240862 master-0 kubenswrapper[26053]: I0318 09:06:50.240589 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"
Mar 18 09:06:50.240862 master-0 kubenswrapper[26053]: I0318 09:06:50.240642 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" event={"ID":"94b229a5-7840-46fe-a221-85093a4f4a72","Type":"ContainerDied","Data":"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"}
Mar 18 09:06:50.240862 master-0 kubenswrapper[26053]: I0318 09:06:50.240722 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm" event={"ID":"94b229a5-7840-46fe-a221-85093a4f4a72","Type":"ContainerDied","Data":"ad0f96131c788e721ab29fe5861a0fee1e64255ac2bda2b065b890a5b75ebf53"}
Mar 18 09:06:50.240862 master-0 kubenswrapper[26053]: I0318 09:06:50.240758 26053 scope.go:117] "RemoveContainer" containerID="581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"
Mar 18 09:06:50.243719 master-0 kubenswrapper[26053]: I0318 09:06:50.243374 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d382cea-1da2-48b9-b151-36438d83ee30" containerID="4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c" exitCode=0
Mar 18 09:06:50.243719 master-0 kubenswrapper[26053]: I0318 09:06:50.243437 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" event={"ID":"7d382cea-1da2-48b9-b151-36438d83ee30","Type":"ContainerDied","Data":"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"}
Mar 18 09:06:50.243719 master-0 kubenswrapper[26053]: I0318 09:06:50.243478 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s" event={"ID":"7d382cea-1da2-48b9-b151-36438d83ee30","Type":"ContainerDied","Data":"5c3362c5f580c6e27737d82422403afb52a8181ef4365cb5c8c417d64f1c3da8"}
Mar 18 09:06:50.243719 master-0 kubenswrapper[26053]: I0318 09:06:50.243560 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"
Mar 18 09:06:50.284598 master-0 kubenswrapper[26053]: I0318 09:06:50.283827 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7748c6b99d-fkjm5"
Mar 18 09:06:50.284894 master-0 kubenswrapper[26053]: I0318 09:06:50.284751 26053 scope.go:117] "RemoveContainer" containerID="581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"
Mar 18 09:06:50.291662 master-0 kubenswrapper[26053]: E0318 09:06:50.288627 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb\": container with ID starting with 581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb not found: ID does not exist" containerID="581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"
Mar 18 09:06:50.291662 master-0 kubenswrapper[26053]: I0318 09:06:50.288674 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb"} err="failed to get container status \"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb\": rpc error: code = NotFound desc = could not find container \"581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb\": container with ID starting with 581f350221bcc5babe731715938c7496901dd4d2837a12ccf73e4cdd96278feb not found: ID does not exist"
Mar 18 09:06:50.291662 master-0 kubenswrapper[26053]: I0318 09:06:50.288708 26053 scope.go:117] "RemoveContainer" containerID="4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"
Mar 18 09:06:50.291662 master-0 kubenswrapper[26053]: I0318 09:06:50.289082 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7748c6b99d-fkjm5"
Mar 18 09:06:50.330220 master-0 kubenswrapper[26053]: I0318 09:06:50.328339 26053 scope.go:117] "RemoveContainer" containerID="4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"
Mar 18 09:06:50.330220 master-0 kubenswrapper[26053]: E0318 09:06:50.328692 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c\": container with ID starting with 4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c not found: ID does not exist" containerID="4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"
Mar 18 09:06:50.330220 master-0 kubenswrapper[26053]: I0318 09:06:50.328728 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c"} err="failed to get container status \"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c\": rpc error: code = NotFound desc = could not find container \"4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c\": container with ID starting with 4dfbd1e6b0f4cf80a2a0317bc36c4bd133c8b496d75c7a3bc95daeb4ecc9574c not found: ID does not exist"
Mar 18 09:06:50.366463 master-0 kubenswrapper[26053]: I0318 09:06:50.366356 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"]
Mar 18 09:06:50.381622 master-0 kubenswrapper[26053]: I0318 09:06:50.380503 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b6fbdfb5-hxtkm"]
Mar 18 09:06:50.395613 master-0 kubenswrapper[26053]: I0318 09:06:50.394708 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 09:06:50.395806 master-0 kubenswrapper[26053]: I0318 09:06:50.395776 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"]
Mar 18 09:06:50.404433 master-0 kubenswrapper[26053]: I0318 09:06:50.402990 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59885f85db-7xg2s"]
Mar 18 09:06:50.408400 master-0 kubenswrapper[26053]: I0318 09:06:50.408357 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-546755554c-h5vql"]
Mar 18 09:06:50.417498 master-0 kubenswrapper[26053]: I0318 09:06:50.417460 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 09:06:50.445903 master-0 kubenswrapper[26053]: I0318 09:06:50.445852 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"]
Mar 18 09:06:50.447234 master-0 kubenswrapper[26053]: E0318 09:06:50.446398 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b229a5-7840-46fe-a221-85093a4f4a72" containerName="controller-manager"
Mar 18 09:06:50.447350 master-0 kubenswrapper[26053]: I0318 09:06:50.447335 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b229a5-7840-46fe-a221-85093a4f4a72" containerName="controller-manager"
Mar 18 09:06:50.447453 master-0 kubenswrapper[26053]: E0318 09:06:50.447439 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d382cea-1da2-48b9-b151-36438d83ee30" containerName="route-controller-manager"
Mar 18 09:06:50.447525 master-0 kubenswrapper[26053]: I0318 09:06:50.447512 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d382cea-1da2-48b9-b151-36438d83ee30" containerName="route-controller-manager"
Mar 18 09:06:50.447747 master-0 kubenswrapper[26053]: I0318 09:06:50.447732 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d382cea-1da2-48b9-b151-36438d83ee30" containerName="route-controller-manager"
Mar 18 09:06:50.447849 master-0 kubenswrapper[26053]: I0318 09:06:50.447838 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b229a5-7840-46fe-a221-85093a4f4a72" containerName="controller-manager"
Mar 18 09:06:50.448385 master-0 kubenswrapper[26053]: I0318 09:06:50.448370 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.450755 master-0 kubenswrapper[26053]: I0318 09:06:50.450711 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"]
Mar 18 09:06:50.451997 master-0 kubenswrapper[26053]: I0318 09:06:50.451962 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:06:50.452109 master-0 kubenswrapper[26053]: I0318 09:06:50.452077 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 09:06:50.452283 master-0 kubenswrapper[26053]: I0318 09:06:50.452260 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:06:50.453514 master-0 kubenswrapper[26053]: I0318 09:06:50.453479 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:06:50.453590 master-0 kubenswrapper[26053]: I0318 09:06:50.453512 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6"
Mar 18 09:06:50.453855 master-0 kubenswrapper[26053]: I0318 09:06:50.453824 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:06:50.454095 master-0 kubenswrapper[26053]: I0318 09:06:50.454080 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 09:06:50.454681 master-0 kubenswrapper[26053]: I0318 09:06:50.454631 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"]
Mar 18 09:06:50.455849 master-0 kubenswrapper[26053]: I0318 09:06:50.455548 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:06:50.456040 master-0 kubenswrapper[26053]: I0318 09:06:50.456006 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:06:50.456149 master-0 kubenswrapper[26053]: I0318 09:06:50.456009 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 09:06:50.457074 master-0 kubenswrapper[26053]: I0318 09:06:50.456952 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4"
Mar 18 09:06:50.457230 master-0 kubenswrapper[26053]: I0318 09:06:50.457182 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 09:06:50.457390 master-0 kubenswrapper[26053]: I0318 09:06:50.457328 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 09:06:50.458805 master-0 kubenswrapper[26053]: I0318 09:06:50.458759 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 09:06:50.469986 master-0 kubenswrapper[26053]: I0318 09:06:50.469946 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"]
Mar 18 09:06:50.492292 master-0 kubenswrapper[26053]: I0318 09:06:50.492243 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7k5\" (UniqueName: \"kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:06:50.492380 master-0 kubenswrapper[26053]: I0318 09:06:50.492310 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:06:50.492380 master-0 kubenswrapper[26053]: I0318 09:06:50.492354 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:06:50.492449 master-0 kubenswrapper[26053]: I0318 09:06:50.492389 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:06:50.492449 master-0 kubenswrapper[26053]: I0318 09:06:50.492425 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:06:50.492510 master-0 kubenswrapper[26053]: I0318 09:06:50.492458 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4hw\" (UniqueName: \"kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.492510 master-0 kubenswrapper[26053]: I0318 09:06:50.492485 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.492600 master-0 kubenswrapper[26053]: I0318 09:06:50.492524 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.492600 master-0 kubenswrapper[26053]: I0318 09:06:50.492554 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.492600 master-0 kubenswrapper[26053]: I0318 09:06:50.492593 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:06:50.492812 master-0 kubenswrapper[26053]: E0318 09:06:50.492763 26053 secret.go:189] Couldn't get secret
openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 09:06:50.492856 master-0 kubenswrapper[26053]: E0318 09:06:50.492813 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:06:51.492796137 +0000 UTC m=+198.986147528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found Mar 18 09:06:50.594300 master-0 kubenswrapper[26053]: I0318 09:06:50.594179 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.594451 master-0 kubenswrapper[26053]: I0318 09:06:50.594300 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n4hw\" (UniqueName: \"kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.594451 master-0 kubenswrapper[26053]: I0318 09:06:50.594363 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.595074 master-0 kubenswrapper[26053]: I0318 09:06:50.594817 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.595074 master-0 kubenswrapper[26053]: I0318 09:06:50.594952 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.595074 master-0 kubenswrapper[26053]: I0318 09:06:50.595004 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.595790 master-0 kubenswrapper[26053]: I0318 09:06:50.595719 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f7k5\" (UniqueName: \"kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 
09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596020 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596264 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596323 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596065 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596650 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config\") pod 
\"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.597115 master-0 kubenswrapper[26053]: I0318 09:06:50.596904 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.597558 master-0 kubenswrapper[26053]: I0318 09:06:50.597134 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.600804 master-0 kubenswrapper[26053]: I0318 09:06:50.599958 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.600804 master-0 kubenswrapper[26053]: I0318 09:06:50.600739 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.620631 master-0 kubenswrapper[26053]: I0318 09:06:50.620462 26053 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2f7k5\" (UniqueName: \"kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5\") pod \"route-controller-manager-6f6764dfff-sj6m4\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") " pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:50.623681 master-0 kubenswrapper[26053]: I0318 09:06:50.623624 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n4hw\" (UniqueName: \"kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw\") pod \"controller-manager-688ff857d6-jwr6g\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") " pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.630725 master-0 kubenswrapper[26053]: I0318 09:06:50.630681 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:06:50.644251 master-0 kubenswrapper[26053]: I0318 09:06:50.644216 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 09:06:50.739658 master-0 kubenswrapper[26053]: I0318 09:06:50.739542 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d382cea-1da2-48b9-b151-36438d83ee30" path="/var/lib/kubelet/pods/7d382cea-1da2-48b9-b151-36438d83ee30/volumes" Mar 18 09:06:50.741005 master-0 kubenswrapper[26053]: I0318 09:06:50.740954 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b229a5-7840-46fe-a221-85093a4f4a72" path="/var/lib/kubelet/pods/94b229a5-7840-46fe-a221-85093a4f4a72/volumes" Mar 18 09:06:50.776999 master-0 kubenswrapper[26053]: I0318 09:06:50.776363 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:50.792969 master-0 kubenswrapper[26053]: I0318 09:06:50.792895 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 18 09:06:50.803715 master-0 kubenswrapper[26053]: I0318 09:06:50.801766 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 18 09:06:50.806913 master-0 kubenswrapper[26053]: I0318 09:06:50.806797 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:51.070128 master-0 kubenswrapper[26053]: I0318 09:06:51.070085 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 18 09:06:51.198737 master-0 kubenswrapper[26053]: I0318 09:06:51.198588 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 09:06:51.316721 master-0 kubenswrapper[26053]: I0318 09:06:51.316659 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"] Mar 18 09:06:51.364103 master-0 kubenswrapper[26053]: I0318 09:06:51.363709 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"] Mar 18 09:06:51.367818 master-0 kubenswrapper[26053]: W0318 09:06:51.367776 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01225b27_7346_42c4_82c8_41dd15efbce6.slice/crio-30fafde9c721c27794681988c6ac40fac0c4dfc0bf2ee68b21bc0fd510c2142d WatchSource:0}: Error finding container 30fafde9c721c27794681988c6ac40fac0c4dfc0bf2ee68b21bc0fd510c2142d: Status 404 returned error can't find the 
container with id 30fafde9c721c27794681988c6ac40fac0c4dfc0bf2ee68b21bc0fd510c2142d Mar 18 09:06:51.392173 master-0 kubenswrapper[26053]: I0318 09:06:51.392030 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 18 09:06:51.515709 master-0 kubenswrapper[26053]: I0318 09:06:51.515660 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" Mar 18 09:06:51.515894 master-0 kubenswrapper[26053]: E0318 09:06:51.515850 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 09:06:51.515968 master-0 kubenswrapper[26053]: E0318 09:06:51.515951 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:06:53.515933478 +0000 UTC m=+201.009284859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found Mar 18 09:06:51.707653 master-0 kubenswrapper[26053]: I0318 09:06:51.707606 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 18 09:06:51.852834 master-0 kubenswrapper[26053]: I0318 09:06:51.852765 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 09:06:52.132786 master-0 kubenswrapper[26053]: I0318 09:06:52.132650 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 18 09:06:52.267431 master-0 kubenswrapper[26053]: I0318 09:06:52.267363 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" event={"ID":"01225b27-7346-42c4-82c8-41dd15efbce6","Type":"ContainerStarted","Data":"0936c170059288a289391bc4ea2bf8e4554b9341cf917449311ada2ece9066a2"} Mar 18 09:06:52.267431 master-0 kubenswrapper[26053]: I0318 09:06:52.267428 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" event={"ID":"01225b27-7346-42c4-82c8-41dd15efbce6","Type":"ContainerStarted","Data":"30fafde9c721c27794681988c6ac40fac0c4dfc0bf2ee68b21bc0fd510c2142d"} Mar 18 09:06:52.267715 master-0 kubenswrapper[26053]: I0318 09:06:52.267592 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:52.268860 master-0 kubenswrapper[26053]: I0318 09:06:52.268828 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" event={"ID":"62b54392-c01e-4d5d-9245-0a8ec8ff800b","Type":"ContainerStarted","Data":"92168d9d7c54ee0acedf12086cb0af9b9a961c385d5b23627bf9d819aaca1822"} Mar 18 09:06:52.268860 master-0 kubenswrapper[26053]: I0318 09:06:52.268857 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" event={"ID":"62b54392-c01e-4d5d-9245-0a8ec8ff800b","Type":"ContainerStarted","Data":"bb156d03f8475f40ee0b52dbceb3b772b1e72f244f53f64bf9f5da9236b4a6bf"} Mar 18 09:06:52.269586 master-0 kubenswrapper[26053]: I0318 09:06:52.269538 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:52.278990 master-0 kubenswrapper[26053]: I0318 09:06:52.278954 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" Mar 18 09:06:52.279211 master-0 kubenswrapper[26053]: I0318 09:06:52.279060 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" Mar 18 09:06:52.293400 master-0 kubenswrapper[26053]: I0318 09:06:52.293328 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" podStartSLOduration=3.293308369 podStartE2EDuration="3.293308369s" podCreationTimestamp="2026-03-18 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:06:52.292065027 +0000 UTC m=+199.785416428" watchObservedRunningTime="2026-03-18 09:06:52.293308369 +0000 UTC m=+199.786659750" Mar 18 09:06:52.312817 master-0 kubenswrapper[26053]: I0318 09:06:52.312721 26053 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" podStartSLOduration=3.3127011729999998 podStartE2EDuration="3.312701173s" podCreationTimestamp="2026-03-18 09:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:06:52.310752773 +0000 UTC m=+199.804104214" watchObservedRunningTime="2026-03-18 09:06:52.312701173 +0000 UTC m=+199.806052554" Mar 18 09:06:52.431930 master-0 kubenswrapper[26053]: I0318 09:06:52.431740 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 18 09:06:52.674795 master-0 kubenswrapper[26053]: I0318 09:06:52.674733 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:06:52.679966 master-0 kubenswrapper[26053]: I0318 09:06:52.679921 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 18 09:06:52.974705 master-0 kubenswrapper[26053]: I0318 09:06:52.974636 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 18 09:06:53.065412 master-0 kubenswrapper[26053]: I0318 09:06:53.065318 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 09:06:53.535846 master-0 kubenswrapper[26053]: I0318 09:06:53.535787 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:06:53.555461 master-0 kubenswrapper[26053]: I0318 09:06:53.555360 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod 
\"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" Mar 18 09:06:53.555745 master-0 kubenswrapper[26053]: E0318 09:06:53.555691 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 09:06:53.556032 master-0 kubenswrapper[26053]: E0318 09:06:53.555991 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:06:57.555957939 +0000 UTC m=+205.049309350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found Mar 18 09:06:55.095347 master-0 kubenswrapper[26053]: I0318 09:06:55.095285 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 09:06:56.839951 master-0 kubenswrapper[26053]: I0318 09:06:56.839837 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:06:56.841013 master-0 kubenswrapper[26053]: I0318 09:06:56.840280 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor" containerID="cri-o://17fe525ef9fd969ea224700d998daa2ed4c945cd5dea489ea725d4fcd88fbd4a" gracePeriod=5 Mar 18 09:06:57.624876 master-0 kubenswrapper[26053]: I0318 
09:06:57.624781 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" Mar 18 09:06:57.625139 master-0 kubenswrapper[26053]: E0318 09:06:57.625068 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 18 09:06:57.625243 master-0 kubenswrapper[26053]: E0318 09:06:57.625205 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:05.625173557 +0000 UTC m=+213.118524978 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found Mar 18 09:07:02.359943 master-0 kubenswrapper[26053]: I0318 09:07:02.359828 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 18 09:07:02.359943 master-0 kubenswrapper[26053]: I0318 09:07:02.359902 26053 generic.go:334] "Generic (PLEG): container finished" podID="16fb4ea7f83036d9c6adf3454fc7e9db" containerID="17fe525ef9fd969ea224700d998daa2ed4c945cd5dea489ea725d4fcd88fbd4a" exitCode=137 Mar 18 09:07:02.454231 master-0 kubenswrapper[26053]: I0318 09:07:02.454177 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log" Mar 18 09:07:02.454457 master-0 kubenswrapper[26053]: I0318 09:07:02.454277 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:07:02.505191 master-0 kubenswrapper[26053]: I0318 09:07:02.505097 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 09:07:02.505431 master-0 kubenswrapper[26053]: I0318 09:07:02.505383 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") " Mar 18 09:07:02.505716 master-0 kubenswrapper[26053]: I0318 09:07:02.505654 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:07:02.505782 master-0 kubenswrapper[26053]: I0318 09:07:02.505715 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests" (OuterVolumeSpecName: "manifests") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:02.505868 master-0 kubenswrapper[26053]: I0318 09:07:02.505552 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 09:07:02.506040 master-0 kubenswrapper[26053]: I0318 09:07:02.506003 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 09:07:02.506120 master-0 kubenswrapper[26053]: I0318 09:07:02.506096 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log" (OuterVolumeSpecName: "var-log") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:02.506182 master-0 kubenswrapper[26053]: I0318 09:07:02.506128 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") pod \"16fb4ea7f83036d9c6adf3454fc7e9db\" (UID: \"16fb4ea7f83036d9c6adf3454fc7e9db\") "
Mar 18 09:07:02.506335 master-0 kubenswrapper[26053]: I0318 09:07:02.506278 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock" (OuterVolumeSpecName: "var-lock") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:02.507942 master-0 kubenswrapper[26053]: I0318 09:07:02.507857 26053 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-manifests\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:02.508036 master-0 kubenswrapper[26053]: I0318 09:07:02.507938 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:02.508036 master-0 kubenswrapper[26053]: I0318 09:07:02.507974 26053 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-log\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:02.508036 master-0 kubenswrapper[26053]: I0318 09:07:02.507999 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:02.513343 master-0 kubenswrapper[26053]: I0318 09:07:02.513276 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "16fb4ea7f83036d9c6adf3454fc7e9db" (UID: "16fb4ea7f83036d9c6adf3454fc7e9db"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:02.610217 master-0 kubenswrapper[26053]: I0318 09:07:02.610022 26053 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/16fb4ea7f83036d9c6adf3454fc7e9db-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:02.748995 master-0 kubenswrapper[26053]: I0318 09:07:02.748916 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" path="/var/lib/kubelet/pods/16fb4ea7f83036d9c6adf3454fc7e9db/volumes"
Mar 18 09:07:03.376843 master-0 kubenswrapper[26053]: I0318 09:07:03.376779 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_16fb4ea7f83036d9c6adf3454fc7e9db/startup-monitor/0.log"
Mar 18 09:07:03.377529 master-0 kubenswrapper[26053]: I0318 09:07:03.376909 26053 scope.go:117] "RemoveContainer" containerID="17fe525ef9fd969ea224700d998daa2ed4c945cd5dea489ea725d4fcd88fbd4a"
Mar 18 09:07:03.377529 master-0 kubenswrapper[26053]: I0318 09:07:03.376997 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:07:05.658533 master-0 kubenswrapper[26053]: I0318 09:07:05.658451 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:07:05.659652 master-0 kubenswrapper[26053]: E0318 09:07:05.658662 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:05.659791 master-0 kubenswrapper[26053]: E0318 09:07:05.659680 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:21.6596541 +0000 UTC m=+229.153005521 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:09.238961 master-0 kubenswrapper[26053]: I0318 09:07:09.238902 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"]
Mar 18 09:07:09.239794 master-0 kubenswrapper[26053]: I0318 09:07:09.239145 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" podUID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" containerName="controller-manager" containerID="cri-o://92168d9d7c54ee0acedf12086cb0af9b9a961c385d5b23627bf9d819aaca1822" gracePeriod=30
Mar 18 09:07:09.263762 master-0 kubenswrapper[26053]: I0318 09:07:09.263695 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"]
Mar 18 09:07:09.263997 master-0 kubenswrapper[26053]: I0318 09:07:09.263918 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" podUID="01225b27-7346-42c4-82c8-41dd15efbce6" containerName="route-controller-manager" containerID="cri-o://0936c170059288a289391bc4ea2bf8e4554b9341cf917449311ada2ece9066a2" gracePeriod=30
Mar 18 09:07:09.423893 master-0 kubenswrapper[26053]: I0318 09:07:09.423553 26053 generic.go:334] "Generic (PLEG): container finished" podID="01225b27-7346-42c4-82c8-41dd15efbce6" containerID="0936c170059288a289391bc4ea2bf8e4554b9341cf917449311ada2ece9066a2" exitCode=0
Mar 18 09:07:09.423893 master-0 kubenswrapper[26053]: I0318 09:07:09.423643 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" event={"ID":"01225b27-7346-42c4-82c8-41dd15efbce6","Type":"ContainerDied","Data":"0936c170059288a289391bc4ea2bf8e4554b9341cf917449311ada2ece9066a2"}
Mar 18 09:07:09.425822 master-0 kubenswrapper[26053]: I0318 09:07:09.425777 26053 generic.go:334] "Generic (PLEG): container finished" podID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" containerID="92168d9d7c54ee0acedf12086cb0af9b9a961c385d5b23627bf9d819aaca1822" exitCode=0
Mar 18 09:07:09.425880 master-0 kubenswrapper[26053]: I0318 09:07:09.425827 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" event={"ID":"62b54392-c01e-4d5d-9245-0a8ec8ff800b","Type":"ContainerDied","Data":"92168d9d7c54ee0acedf12086cb0af9b9a961c385d5b23627bf9d819aaca1822"}
Mar 18 09:07:10.045480 master-0 kubenswrapper[26053]: I0318 09:07:10.045442 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:07:10.152584 master-0 kubenswrapper[26053]: I0318 09:07:10.148935 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config\") pod \"01225b27-7346-42c4-82c8-41dd15efbce6\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") "
Mar 18 09:07:10.152584 master-0 kubenswrapper[26053]: I0318 09:07:10.149021 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f7k5\" (UniqueName: \"kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5\") pod \"01225b27-7346-42c4-82c8-41dd15efbce6\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") "
Mar 18 09:07:10.152584 master-0 kubenswrapper[26053]: I0318 09:07:10.149117 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca\") pod \"01225b27-7346-42c4-82c8-41dd15efbce6\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") "
Mar 18 09:07:10.152584 master-0 kubenswrapper[26053]: I0318 09:07:10.149144 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert\") pod \"01225b27-7346-42c4-82c8-41dd15efbce6\" (UID: \"01225b27-7346-42c4-82c8-41dd15efbce6\") "
Mar 18 09:07:10.154114 master-0 kubenswrapper[26053]: I0318 09:07:10.154063 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca" (OuterVolumeSpecName: "client-ca") pod "01225b27-7346-42c4-82c8-41dd15efbce6" (UID: "01225b27-7346-42c4-82c8-41dd15efbce6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:10.154199 master-0 kubenswrapper[26053]: I0318 09:07:10.154166 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config" (OuterVolumeSpecName: "config") pod "01225b27-7346-42c4-82c8-41dd15efbce6" (UID: "01225b27-7346-42c4-82c8-41dd15efbce6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:10.154882 master-0 kubenswrapper[26053]: I0318 09:07:10.154830 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01225b27-7346-42c4-82c8-41dd15efbce6" (UID: "01225b27-7346-42c4-82c8-41dd15efbce6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:07:10.155431 master-0 kubenswrapper[26053]: I0318 09:07:10.155158 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5" (OuterVolumeSpecName: "kube-api-access-2f7k5") pod "01225b27-7346-42c4-82c8-41dd15efbce6" (UID: "01225b27-7346-42c4-82c8-41dd15efbce6"). InnerVolumeSpecName "kube-api-access-2f7k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:10.210338 master-0 kubenswrapper[26053]: I0318 09:07:10.210298 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:07:10.250202 master-0 kubenswrapper[26053]: I0318 09:07:10.250052 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles\") pod \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") "
Mar 18 09:07:10.250861 master-0 kubenswrapper[26053]: I0318 09:07:10.250621 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "62b54392-c01e-4d5d-9245-0a8ec8ff800b" (UID: "62b54392-c01e-4d5d-9245-0a8ec8ff800b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:10.250861 master-0 kubenswrapper[26053]: I0318 09:07:10.250742 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert\") pod \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") "
Mar 18 09:07:10.250978 master-0 kubenswrapper[26053]: I0318 09:07:10.250915 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca\") pod \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") "
Mar 18 09:07:10.251658 master-0 kubenswrapper[26053]: I0318 09:07:10.251539 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca" (OuterVolumeSpecName: "client-ca") pod "62b54392-c01e-4d5d-9245-0a8ec8ff800b" (UID: "62b54392-c01e-4d5d-9245-0a8ec8ff800b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:10.251743 master-0 kubenswrapper[26053]: I0318 09:07:10.251702 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config\") pod \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") "
Mar 18 09:07:10.251743 master-0 kubenswrapper[26053]: I0318 09:07:10.251737 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n4hw\" (UniqueName: \"kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw\") pod \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\" (UID: \"62b54392-c01e-4d5d-9245-0a8ec8ff800b\") "
Mar 18 09:07:10.252256 master-0 kubenswrapper[26053]: I0318 09:07:10.252209 26053 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.252256 master-0 kubenswrapper[26053]: I0318 09:07:10.252233 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.252445 master-0 kubenswrapper[26053]: I0318 09:07:10.252272 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f7k5\" (UniqueName: \"kubernetes.io/projected/01225b27-7346-42c4-82c8-41dd15efbce6-kube-api-access-2f7k5\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.252445 master-0 kubenswrapper[26053]: I0318 09:07:10.252308 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01225b27-7346-42c4-82c8-41dd15efbce6-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.252445 master-0 kubenswrapper[26053]: I0318 09:07:10.252321 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01225b27-7346-42c4-82c8-41dd15efbce6-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.253177 master-0 kubenswrapper[26053]: I0318 09:07:10.253129 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config" (OuterVolumeSpecName: "config") pod "62b54392-c01e-4d5d-9245-0a8ec8ff800b" (UID: "62b54392-c01e-4d5d-9245-0a8ec8ff800b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:10.253177 master-0 kubenswrapper[26053]: I0318 09:07:10.253155 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62b54392-c01e-4d5d-9245-0a8ec8ff800b" (UID: "62b54392-c01e-4d5d-9245-0a8ec8ff800b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:07:10.256521 master-0 kubenswrapper[26053]: I0318 09:07:10.256486 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw" (OuterVolumeSpecName: "kube-api-access-5n4hw") pod "62b54392-c01e-4d5d-9245-0a8ec8ff800b" (UID: "62b54392-c01e-4d5d-9245-0a8ec8ff800b"). InnerVolumeSpecName "kube-api-access-5n4hw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:10.353674 master-0 kubenswrapper[26053]: I0318 09:07:10.353612 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62b54392-c01e-4d5d-9245-0a8ec8ff800b-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.353674 master-0 kubenswrapper[26053]: I0318 09:07:10.353658 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.353674 master-0 kubenswrapper[26053]: I0318 09:07:10.353674 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n4hw\" (UniqueName: \"kubernetes.io/projected/62b54392-c01e-4d5d-9245-0a8ec8ff800b-kube-api-access-5n4hw\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.353948 master-0 kubenswrapper[26053]: I0318 09:07:10.353689 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62b54392-c01e-4d5d-9245-0a8ec8ff800b-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:10.438224 master-0 kubenswrapper[26053]: I0318 09:07:10.438143 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g" event={"ID":"62b54392-c01e-4d5d-9245-0a8ec8ff800b","Type":"ContainerDied","Data":"bb156d03f8475f40ee0b52dbceb3b772b1e72f244f53f64bf9f5da9236b4a6bf"}
Mar 18 09:07:10.438224 master-0 kubenswrapper[26053]: I0318 09:07:10.438221 26053 scope.go:117] "RemoveContainer" containerID="92168d9d7c54ee0acedf12086cb0af9b9a961c385d5b23627bf9d819aaca1822"
Mar 18 09:07:10.438588 master-0 kubenswrapper[26053]: I0318 09:07:10.438264 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-688ff857d6-jwr6g"
Mar 18 09:07:10.446039 master-0 kubenswrapper[26053]: I0318 09:07:10.445910 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4" event={"ID":"01225b27-7346-42c4-82c8-41dd15efbce6","Type":"ContainerDied","Data":"30fafde9c721c27794681988c6ac40fac0c4dfc0bf2ee68b21bc0fd510c2142d"}
Mar 18 09:07:10.446039 master-0 kubenswrapper[26053]: I0318 09:07:10.446010 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"
Mar 18 09:07:10.472556 master-0 kubenswrapper[26053]: I0318 09:07:10.472481 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"]
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: E0318 09:07:10.472949 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.472989 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: E0318 09:07:10.473080 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01225b27-7346-42c4-82c8-41dd15efbce6" containerName="route-controller-manager"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.473101 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="01225b27-7346-42c4-82c8-41dd15efbce6" containerName="route-controller-manager"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: E0318 09:07:10.473126 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" containerName="controller-manager"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.473139 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" containerName="controller-manager"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.473343 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" containerName="controller-manager"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.473373 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fb4ea7f83036d9c6adf3454fc7e9db" containerName="startup-monitor"
Mar 18 09:07:10.473536 master-0 kubenswrapper[26053]: I0318 09:07:10.473458 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="01225b27-7346-42c4-82c8-41dd15efbce6" containerName="route-controller-manager"
Mar 18 09:07:10.474658 master-0 kubenswrapper[26053]: I0318 09:07:10.474218 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.477737 master-0 kubenswrapper[26053]: I0318 09:07:10.477678 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4"
Mar 18 09:07:10.481484 master-0 kubenswrapper[26053]: I0318 09:07:10.481013 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 18 09:07:10.481484 master-0 kubenswrapper[26053]: I0318 09:07:10.481345 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 09:07:10.481484 master-0 kubenswrapper[26053]: I0318 09:07:10.481409 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 09:07:10.481824 master-0 kubenswrapper[26053]: I0318 09:07:10.481602 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:07:10.481824 master-0 kubenswrapper[26053]: I0318 09:07:10.481637 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.497344 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"]
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.498664 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.499227 26053 scope.go:117] "RemoveContainer" containerID="0936c170059288a289391bc4ea2bf8e4554b9341cf917449311ada2ece9066a2"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.503868 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.504532 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.504689 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.504777 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.511119 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 18 09:07:10.513094 master-0 kubenswrapper[26053]: I0318 09:07:10.512387 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 18 09:07:10.514050 master-0 kubenswrapper[26053]: I0318 09:07:10.514016 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 18 09:07:10.516665 master-0 kubenswrapper[26053]: I0318 09:07:10.516617 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"]
Mar 18 09:07:10.522769 master-0 kubenswrapper[26053]: I0318 09:07:10.522728 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"]
Mar 18 09:07:10.526913 master-0 kubenswrapper[26053]: I0318 09:07:10.526875 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"]
Mar 18 09:07:10.531141 master-0 kubenswrapper[26053]: I0318 09:07:10.531077 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-688ff857d6-jwr6g"]
Mar 18 09:07:10.555731 master-0 kubenswrapper[26053]: I0318 09:07:10.555541 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.555731 master-0 kubenswrapper[26053]: I0318 09:07:10.555703 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555761 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555799 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555846 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qspxm\" (UniqueName: \"kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555872 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555919 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm9tc\" (UniqueName: \"kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.556021 master-0 kubenswrapper[26053]: I0318 09:07:10.555940 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.556291 master-0 kubenswrapper[26053]: I0318 09:07:10.556067 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.592737 master-0 kubenswrapper[26053]: I0318 09:07:10.592669 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"]
Mar 18 09:07:10.596850 master-0 kubenswrapper[26053]: I0318 09:07:10.596799 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f6764dfff-sj6m4"]
Mar 18 09:07:10.657509 master-0 kubenswrapper[26053]: I0318 09:07:10.657437 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm9tc\" (UniqueName: \"kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.657509 master-0 kubenswrapper[26053]: I0318 09:07:10.657498 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.657816 master-0 kubenswrapper[26053]: I0318 09:07:10.657557 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.657816 master-0 kubenswrapper[26053]: I0318 09:07:10.657614 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.657816 master-0 kubenswrapper[26053]: I0318 09:07:10.657637 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.657816 master-0 kubenswrapper[26053]: I0318 09:07:10.657766 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.657988 master-0 kubenswrapper[26053]: I0318 09:07:10.657801 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.657988 master-0 kubenswrapper[26053]: I0318 09:07:10.657888 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qspxm\" (UniqueName: \"kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.659019 master-0 kubenswrapper[26053]: I0318 09:07:10.658963 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.659188 master-0 kubenswrapper[26053]: I0318 09:07:10.659140 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.659457 master-0 kubenswrapper[26053]: I0318 09:07:10.659414 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.659877 master-0 kubenswrapper[26053]: I0318 09:07:10.659826 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.661545 master-0 kubenswrapper[26053]: I0318 09:07:10.660720 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.661545 master-0 kubenswrapper[26053]: I0318 09:07:10.661056 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.663717 master-0 kubenswrapper[26053]: I0318 09:07:10.663663 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.664342 master-0 kubenswrapper[26053]: I0318 09:07:10.664290 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.679120 master-0 kubenswrapper[26053]: I0318 09:07:10.679084 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm9tc\" (UniqueName: \"kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc\") pod \"controller-manager-5f7d7c66b9-rmtxp\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") " pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:10.689304 master-0 kubenswrapper[26053]: I0318 09:07:10.689244 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qspxm\" (UniqueName: \"kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm\") pod \"route-controller-manager-65f5f9559d-c7h67\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") " pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:10.743852 master-0 kubenswrapper[26053]: I0318 09:07:10.743795 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01225b27-7346-42c4-82c8-41dd15efbce6" path="/var/lib/kubelet/pods/01225b27-7346-42c4-82c8-41dd15efbce6/volumes"
Mar 
18 09:07:10.745206 master-0 kubenswrapper[26053]: I0318 09:07:10.745162 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b54392-c01e-4d5d-9245-0a8ec8ff800b" path="/var/lib/kubelet/pods/62b54392-c01e-4d5d-9245-0a8ec8ff800b/volumes" Mar 18 09:07:10.854978 master-0 kubenswrapper[26053]: I0318 09:07:10.854842 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" Mar 18 09:07:10.875872 master-0 kubenswrapper[26053]: I0318 09:07:10.875790 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" Mar 18 09:07:11.356765 master-0 kubenswrapper[26053]: I0318 09:07:11.353964 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"] Mar 18 09:07:11.358656 master-0 kubenswrapper[26053]: W0318 09:07:11.357702 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda892bd16_5671_47dc_96dd_f82bbe71739f.slice/crio-81f61629142f9cc55e73284204979ded145e427c2f958381159bcec88c05b8a0 WatchSource:0}: Error finding container 81f61629142f9cc55e73284204979ded145e427c2f958381159bcec88c05b8a0: Status 404 returned error can't find the container with id 81f61629142f9cc55e73284204979ded145e427c2f958381159bcec88c05b8a0 Mar 18 09:07:11.430417 master-0 kubenswrapper[26053]: I0318 09:07:11.430315 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"] Mar 18 09:07:11.434943 master-0 kubenswrapper[26053]: W0318 09:07:11.434886 26053 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a988b0d_7da8_4b03_9d05_996e2ce76fa7.slice/crio-ffdb7e52cb99048623d40657a8a5d99f9698550b54514b26ebc2732b1e6425fe WatchSource:0}: Error finding container ffdb7e52cb99048623d40657a8a5d99f9698550b54514b26ebc2732b1e6425fe: Status 404 returned error can't find the container with id ffdb7e52cb99048623d40657a8a5d99f9698550b54514b26ebc2732b1e6425fe Mar 18 09:07:11.455794 master-0 kubenswrapper[26053]: I0318 09:07:11.455716 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" event={"ID":"a892bd16-5671-47dc-96dd-f82bbe71739f","Type":"ContainerStarted","Data":"81f61629142f9cc55e73284204979ded145e427c2f958381159bcec88c05b8a0"} Mar 18 09:07:11.458881 master-0 kubenswrapper[26053]: I0318 09:07:11.458832 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" event={"ID":"2a988b0d-7da8-4b03-9d05-996e2ce76fa7","Type":"ContainerStarted","Data":"ffdb7e52cb99048623d40657a8a5d99f9698550b54514b26ebc2732b1e6425fe"} Mar 18 09:07:12.467162 master-0 kubenswrapper[26053]: I0318 09:07:12.467108 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" event={"ID":"a892bd16-5671-47dc-96dd-f82bbe71739f","Type":"ContainerStarted","Data":"1df5000054a78f0f67fb3c03509e277e54e86d09de4d9763061eb1ac14332924"} Mar 18 09:07:12.468140 master-0 kubenswrapper[26053]: I0318 09:07:12.468118 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" Mar 18 09:07:12.469321 master-0 kubenswrapper[26053]: I0318 09:07:12.469299 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" 
event={"ID":"2a988b0d-7da8-4b03-9d05-996e2ce76fa7","Type":"ContainerStarted","Data":"80767bbf40a264e6d24ae8b3b5969b77b4b535b99177b2ab99ec6cf8a14bdd49"} Mar 18 09:07:12.469552 master-0 kubenswrapper[26053]: I0318 09:07:12.469498 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" Mar 18 09:07:12.476317 master-0 kubenswrapper[26053]: I0318 09:07:12.476263 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" Mar 18 09:07:12.481578 master-0 kubenswrapper[26053]: I0318 09:07:12.481503 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" Mar 18 09:07:12.495682 master-0 kubenswrapper[26053]: I0318 09:07:12.494118 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" podStartSLOduration=3.494103024 podStartE2EDuration="3.494103024s" podCreationTimestamp="2026-03-18 09:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:12.491871399 +0000 UTC m=+219.985222780" watchObservedRunningTime="2026-03-18 09:07:12.494103024 +0000 UTC m=+219.987454405" Mar 18 09:07:12.588375 master-0 kubenswrapper[26053]: I0318 09:07:12.588311 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" podStartSLOduration=3.588294017 podStartE2EDuration="3.588294017s" podCreationTimestamp="2026-03-18 09:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:12.585866156 +0000 UTC m=+220.079217537" 
watchObservedRunningTime="2026-03-18 09:07:12.588294017 +0000 UTC m=+220.081645398" Mar 18 09:07:15.443420 master-0 kubenswrapper[26053]: I0318 09:07:15.443284 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" containerID="cri-o://99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610" gracePeriod=15 Mar 18 09:07:15.974351 master-0 kubenswrapper[26053]: I0318 09:07:15.974308 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-546755554c-h5vql_7dec975e-18dd-4f13-ac8b-56d9fca1c1f7/console/0.log" Mar 18 09:07:15.974536 master-0 kubenswrapper[26053]: I0318 09:07:15.974372 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.065968 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066045 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066097 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgc5\" (UniqueName: \"kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: 
\"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066139 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066159 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066191 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.066212 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert\") pod \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\" (UID: \"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7\") " Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.071286 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config" (OuterVolumeSpecName: "console-config") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:07:16.072590 master-0 kubenswrapper[26053]: I0318 09:07:16.072151 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:07:16.081588 master-0 kubenswrapper[26053]: I0318 09:07:16.073619 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:07:16.081588 master-0 kubenswrapper[26053]: I0318 09:07:16.073667 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:07:16.081588 master-0 kubenswrapper[26053]: I0318 09:07:16.073897 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca" (OuterVolumeSpecName: "service-ca") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:07:16.095751 master-0 kubenswrapper[26053]: I0318 09:07:16.090824 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5" (OuterVolumeSpecName: "kube-api-access-wmgc5") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "kube-api-access-wmgc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:07:16.095751 master-0 kubenswrapper[26053]: I0318 09:07:16.091768 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" (UID: "7dec975e-18dd-4f13-ac8b-56d9fca1c1f7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:07:16.167465 master-0 kubenswrapper[26053]: I0318 09:07:16.167390 26053 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167465 master-0 kubenswrapper[26053]: I0318 09:07:16.167461 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167934 master-0 kubenswrapper[26053]: I0318 09:07:16.167476 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167934 master-0 kubenswrapper[26053]: I0318 09:07:16.167509 26053 reconciler_common.go:293] "Volume 
detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167934 master-0 kubenswrapper[26053]: I0318 09:07:16.167522 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167934 master-0 kubenswrapper[26053]: I0318 09:07:16.167534 26053 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.167934 master-0 kubenswrapper[26053]: I0318 09:07:16.167545 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgc5\" (UniqueName: \"kubernetes.io/projected/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7-kube-api-access-wmgc5\") on node \"master-0\" DevicePath \"\"" Mar 18 09:07:16.486665 master-0 kubenswrapper[26053]: I0318 09:07:16.486602 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:07:16.487205 master-0 kubenswrapper[26053]: E0318 09:07:16.487094 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" Mar 18 09:07:16.487205 master-0 kubenswrapper[26053]: I0318 09:07:16.487111 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" Mar 18 09:07:16.487448 master-0 kubenswrapper[26053]: I0318 09:07:16.487423 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" Mar 18 09:07:16.488182 master-0 kubenswrapper[26053]: I0318 09:07:16.488155 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.492608 master-0 kubenswrapper[26053]: I0318 09:07:16.492535 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6rhb" Mar 18 09:07:16.495811 master-0 kubenswrapper[26053]: I0318 09:07:16.495719 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523099 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-546755554c-h5vql_7dec975e-18dd-4f13-ac8b-56d9fca1c1f7/console/0.log" Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523184 26053 generic.go:334] "Generic (PLEG): container finished" podID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerID="99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610" exitCode=2 Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523247 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-546755554c-h5vql" event={"ID":"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7","Type":"ContainerDied","Data":"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610"} Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523282 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-546755554c-h5vql" Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523341 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-546755554c-h5vql" event={"ID":"7dec975e-18dd-4f13-ac8b-56d9fca1c1f7","Type":"ContainerDied","Data":"5c4074685f6a68d304a6c74d54b4b2169802ed0ee9c82f481051d37f2810081f"} Mar 18 09:07:16.524508 master-0 kubenswrapper[26053]: I0318 09:07:16.523366 26053 scope.go:117] "RemoveContainer" containerID="99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610" Mar 18 09:07:16.547192 master-0 kubenswrapper[26053]: I0318 09:07:16.544486 26053 scope.go:117] "RemoveContainer" containerID="99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610" Mar 18 09:07:16.547192 master-0 kubenswrapper[26053]: E0318 09:07:16.545058 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610\": container with ID starting with 99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610 not found: ID does not exist" containerID="99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610" Mar 18 09:07:16.547192 master-0 kubenswrapper[26053]: I0318 09:07:16.545091 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610"} err="failed to get container status \"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610\": rpc error: code = NotFound desc = could not find container \"99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610\": container with ID starting with 99bd87a4df57cc9b46f4cab72821451bcd3d21345fc4e97784fe91fbcd19f610 not found: ID does not exist" Mar 18 09:07:16.555900 master-0 kubenswrapper[26053]: I0318 09:07:16.550015 26053 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:07:16.572514 master-0 kubenswrapper[26053]: I0318 09:07:16.572341 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.572514 master-0 kubenswrapper[26053]: I0318 09:07:16.572419 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.572782 master-0 kubenswrapper[26053]: I0318 09:07:16.572724 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.575257 master-0 kubenswrapper[26053]: I0318 09:07:16.575205 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-546755554c-h5vql"] Mar 18 09:07:16.582310 master-0 kubenswrapper[26053]: I0318 09:07:16.582246 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-546755554c-h5vql"] Mar 18 09:07:16.674604 master-0 kubenswrapper[26053]: I0318 09:07:16.674507 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir\") pod 
\"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.674906 master-0 kubenswrapper[26053]: I0318 09:07:16.674638 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.674906 master-0 kubenswrapper[26053]: I0318 09:07:16.674674 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.674906 master-0 kubenswrapper[26053]: I0318 09:07:16.674731 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.674906 master-0 kubenswrapper[26053]: I0318 09:07:16.674683 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.691720 master-0 kubenswrapper[26053]: I0318 09:07:16.691676 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:16.741117 master-0 kubenswrapper[26053]: I0318 09:07:16.740863 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" path="/var/lib/kubelet/pods/7dec975e-18dd-4f13-ac8b-56d9fca1c1f7/volumes" Mar 18 09:07:16.813641 master-0 kubenswrapper[26053]: I0318 09:07:16.813333 26053 patch_prober.go:28] interesting pod/console-546755554c-h5vql container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: i/o timeout" start-of-body= Mar 18 09:07:16.813641 master-0 kubenswrapper[26053]: I0318 09:07:16.813483 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-546755554c-h5vql" podUID="7dec975e-18dd-4f13-ac8b-56d9fca1c1f7" containerName="console" probeResult="failure" output="Get \"https://10.128.0.86:8443/health\": dial tcp 10.128.0.86:8443: i/o timeout" Mar 18 09:07:16.839521 master-0 kubenswrapper[26053]: I0318 09:07:16.839465 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 18 09:07:17.364962 master-0 kubenswrapper[26053]: I0318 09:07:17.364894 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 18 09:07:17.370185 master-0 kubenswrapper[26053]: W0318 09:07:17.370128 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7dcc6db5_f20e_431f_9f0b_818bd3830f41.slice/crio-36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98 WatchSource:0}: Error finding container 36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98: Status 404 returned error can't find the container with id 36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98 Mar 18 09:07:17.531340 master-0 kubenswrapper[26053]: I0318 09:07:17.531252 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"7dcc6db5-f20e-431f-9f0b-818bd3830f41","Type":"ContainerStarted","Data":"36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98"} Mar 18 09:07:18.546040 master-0 kubenswrapper[26053]: I0318 09:07:18.545949 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"7dcc6db5-f20e-431f-9f0b-818bd3830f41","Type":"ContainerStarted","Data":"4a864a3ec5e5c79a4987e8bddbd49b8483d9a4bcb65117ff0512bf9b08b6a111"} Mar 18 09:07:21.755120 master-0 kubenswrapper[26053]: I0318 09:07:21.755035 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" Mar 18 09:07:21.756169 master-0 
kubenswrapper[26053]: E0318 09:07:21.755248 26053 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found
Mar 18 09:07:21.756169 master-0 kubenswrapper[26053]: E0318 09:07:21.755338 26053 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert podName:c2e9661c-0359-460f-a97d-a06f2b572d23 nodeName:}" failed. No retries permitted until 2026-03-18 09:07:53.755315225 +0000 UTC m=+261.248666616 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert") pod "networking-console-plugin-7c6b76c555-tfl88" (UID: "c2e9661c-0359-460f-a97d-a06f2b572d23") : secret "networking-console-plugin-cert" not found
Mar 18 09:07:29.236056 master-0 kubenswrapper[26053]: I0318 09:07:29.235951 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=13.235925807 podStartE2EDuration="13.235925807s" podCreationTimestamp="2026-03-18 09:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:18.567261387 +0000 UTC m=+226.060612788" watchObservedRunningTime="2026-03-18 09:07:29.235925807 +0000 UTC m=+236.729277198"
Mar 18 09:07:29.241740 master-0 kubenswrapper[26053]: I0318 09:07:29.241667 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"]
Mar 18 09:07:29.242184 master-0 kubenswrapper[26053]: I0318 09:07:29.242110 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" podUID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" containerName="controller-manager" containerID="cri-o://80767bbf40a264e6d24ae8b3b5969b77b4b535b99177b2ab99ec6cf8a14bdd49" gracePeriod=30
Mar 18 09:07:29.249679 master-0 kubenswrapper[26053]: I0318 09:07:29.249616 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"]
Mar 18 09:07:29.249905 master-0 kubenswrapper[26053]: I0318 09:07:29.249870 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" podUID="a892bd16-5671-47dc-96dd-f82bbe71739f" containerName="route-controller-manager" containerID="cri-o://1df5000054a78f0f67fb3c03509e277e54e86d09de4d9763061eb1ac14332924" gracePeriod=30
Mar 18 09:07:29.626290 master-0 kubenswrapper[26053]: I0318 09:07:29.626225 26053 generic.go:334] "Generic (PLEG): container finished" podID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" containerID="80767bbf40a264e6d24ae8b3b5969b77b4b535b99177b2ab99ec6cf8a14bdd49" exitCode=0
Mar 18 09:07:29.626539 master-0 kubenswrapper[26053]: I0318 09:07:29.626314 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" event={"ID":"2a988b0d-7da8-4b03-9d05-996e2ce76fa7","Type":"ContainerDied","Data":"80767bbf40a264e6d24ae8b3b5969b77b4b535b99177b2ab99ec6cf8a14bdd49"}
Mar 18 09:07:29.627748 master-0 kubenswrapper[26053]: I0318 09:07:29.627698 26053 generic.go:334] "Generic (PLEG): container finished" podID="a892bd16-5671-47dc-96dd-f82bbe71739f" containerID="1df5000054a78f0f67fb3c03509e277e54e86d09de4d9763061eb1ac14332924" exitCode=0
Mar 18 09:07:29.627799 master-0 kubenswrapper[26053]: I0318 09:07:29.627754 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" event={"ID":"a892bd16-5671-47dc-96dd-f82bbe71739f","Type":"ContainerDied","Data":"1df5000054a78f0f67fb3c03509e277e54e86d09de4d9763061eb1ac14332924"}
Mar 18 09:07:29.958595 master-0 kubenswrapper[26053]: I0318 09:07:29.958537 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.003682 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: E0318 09:07:30.003996 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a892bd16-5671-47dc-96dd-f82bbe71739f" containerName="route-controller-manager"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.004012 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="a892bd16-5671-47dc-96dd-f82bbe71739f" containerName="route-controller-manager"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.004223 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="a892bd16-5671-47dc-96dd-f82bbe71739f" containerName="route-controller-manager"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.004765 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.007061 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-jw7t8"
Mar 18 09:07:30.007686 master-0 kubenswrapper[26053]: I0318 09:07:30.007096 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 18 09:07:30.011623 master-0 kubenswrapper[26053]: I0318 09:07:30.011430 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:07:30.096122 master-0 kubenswrapper[26053]: I0318 09:07:30.096067 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca\") pod \"a892bd16-5671-47dc-96dd-f82bbe71739f\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") "
Mar 18 09:07:30.096122 master-0 kubenswrapper[26053]: I0318 09:07:30.096122 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config\") pod \"a892bd16-5671-47dc-96dd-f82bbe71739f\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") "
Mar 18 09:07:30.096365 master-0 kubenswrapper[26053]: I0318 09:07:30.096224 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert\") pod \"a892bd16-5671-47dc-96dd-f82bbe71739f\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") "
Mar 18 09:07:30.096365 master-0 kubenswrapper[26053]: I0318 09:07:30.096251 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qspxm\" (UniqueName: \"kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm\") pod \"a892bd16-5671-47dc-96dd-f82bbe71739f\" (UID: \"a892bd16-5671-47dc-96dd-f82bbe71739f\") "
Mar 18 09:07:30.096445 master-0 kubenswrapper[26053]: I0318 09:07:30.096424 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.096485 master-0 kubenswrapper[26053]: I0318 09:07:30.096462 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.096581 master-0 kubenswrapper[26053]: I0318 09:07:30.096542 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.097691 master-0 kubenswrapper[26053]: I0318 09:07:30.097657 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca" (OuterVolumeSpecName: "client-ca") pod "a892bd16-5671-47dc-96dd-f82bbe71739f" (UID: "a892bd16-5671-47dc-96dd-f82bbe71739f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:30.098314 master-0 kubenswrapper[26053]: I0318 09:07:30.098289 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config" (OuterVolumeSpecName: "config") pod "a892bd16-5671-47dc-96dd-f82bbe71739f" (UID: "a892bd16-5671-47dc-96dd-f82bbe71739f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:30.100790 master-0 kubenswrapper[26053]: I0318 09:07:30.100753 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a892bd16-5671-47dc-96dd-f82bbe71739f" (UID: "a892bd16-5671-47dc-96dd-f82bbe71739f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:07:30.102544 master-0 kubenswrapper[26053]: I0318 09:07:30.102497 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm" (OuterVolumeSpecName: "kube-api-access-qspxm") pod "a892bd16-5671-47dc-96dd-f82bbe71739f" (UID: "a892bd16-5671-47dc-96dd-f82bbe71739f"). InnerVolumeSpecName "kube-api-access-qspxm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:30.164608 master-0 kubenswrapper[26053]: I0318 09:07:30.164550 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198038 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198105 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198150 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198197 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198235 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a892bd16-5671-47dc-96dd-f82bbe71739f-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198247 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a892bd16-5671-47dc-96dd-f82bbe71739f-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198259 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qspxm\" (UniqueName: \"kubernetes.io/projected/a892bd16-5671-47dc-96dd-f82bbe71739f-kube-api-access-qspxm\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198615 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.201692 master-0 kubenswrapper[26053]: I0318 09:07:30.198667 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.216536 master-0 kubenswrapper[26053]: I0318 09:07:30.216490 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access\") pod \"installer-5-master-0\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.299211 master-0 kubenswrapper[26053]: I0318 09:07:30.298750 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca\") pod \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") "
Mar 18 09:07:30.299211 master-0 kubenswrapper[26053]: I0318 09:07:30.298793 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles\") pod \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") "
Mar 18 09:07:30.299211 master-0 kubenswrapper[26053]: I0318 09:07:30.298823 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config\") pod \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") "
Mar 18 09:07:30.299211 master-0 kubenswrapper[26053]: I0318 09:07:30.298885 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert\") pod \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") "
Mar 18 09:07:30.299211 master-0 kubenswrapper[26053]: I0318 09:07:30.298968 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9tc\" (UniqueName: \"kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc\") pod \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\" (UID: \"2a988b0d-7da8-4b03-9d05-996e2ce76fa7\") "
Mar 18 09:07:30.301101 master-0 kubenswrapper[26053]: I0318 09:07:30.299230 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a988b0d-7da8-4b03-9d05-996e2ce76fa7" (UID: "2a988b0d-7da8-4b03-9d05-996e2ce76fa7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:30.301101 master-0 kubenswrapper[26053]: I0318 09:07:30.299331 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.301101 master-0 kubenswrapper[26053]: I0318 09:07:30.299508 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a988b0d-7da8-4b03-9d05-996e2ce76fa7" (UID: "2a988b0d-7da8-4b03-9d05-996e2ce76fa7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:30.301101 master-0 kubenswrapper[26053]: I0318 09:07:30.299542 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config" (OuterVolumeSpecName: "config") pod "2a988b0d-7da8-4b03-9d05-996e2ce76fa7" (UID: "2a988b0d-7da8-4b03-9d05-996e2ce76fa7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:30.303956 master-0 kubenswrapper[26053]: I0318 09:07:30.303923 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a988b0d-7da8-4b03-9d05-996e2ce76fa7" (UID: "2a988b0d-7da8-4b03-9d05-996e2ce76fa7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:07:30.304200 master-0 kubenswrapper[26053]: I0318 09:07:30.304166 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc" (OuterVolumeSpecName: "kube-api-access-hm9tc") pod "2a988b0d-7da8-4b03-9d05-996e2ce76fa7" (UID: "2a988b0d-7da8-4b03-9d05-996e2ce76fa7"). InnerVolumeSpecName "kube-api-access-hm9tc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:30.360060 master-0 kubenswrapper[26053]: I0318 09:07:30.360000 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 18 09:07:30.401209 master-0 kubenswrapper[26053]: I0318 09:07:30.401168 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hm9tc\" (UniqueName: \"kubernetes.io/projected/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-kube-api-access-hm9tc\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.401209 master-0 kubenswrapper[26053]: I0318 09:07:30.401201 26053 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.401209 master-0 kubenswrapper[26053]: I0318 09:07:30.401212 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.401444 master-0 kubenswrapper[26053]: I0318 09:07:30.401220 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a988b0d-7da8-4b03-9d05-996e2ce76fa7-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:30.497100 master-0 kubenswrapper[26053]: I0318 09:07:30.494633 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"]
Mar 18 09:07:30.497100 master-0 kubenswrapper[26053]: E0318 09:07:30.494983 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" containerName="controller-manager"
Mar 18 09:07:30.497100 master-0 kubenswrapper[26053]: I0318 09:07:30.494996 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" containerName="controller-manager"
Mar 18 09:07:30.497100 master-0 kubenswrapper[26053]: I0318 09:07:30.495113 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" containerName="controller-manager"
Mar 18 09:07:30.497100 master-0 kubenswrapper[26053]: I0318 09:07:30.495609 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.518889 master-0 kubenswrapper[26053]: I0318 09:07:30.517359 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"]
Mar 18 09:07:30.523266 master-0 kubenswrapper[26053]: I0318 09:07:30.522290 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.532897 master-0 kubenswrapper[26053]: I0318 09:07:30.530754 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"]
Mar 18 09:07:30.535992 master-0 kubenswrapper[26053]: I0318 09:07:30.535902 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"]
Mar 18 09:07:30.604310 master-0 kubenswrapper[26053]: I0318 09:07:30.604243 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.604310 master-0 kubenswrapper[26053]: I0318 09:07:30.604323 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdr5z\" (UniqueName: \"kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.604618 master-0 kubenswrapper[26053]: I0318 09:07:30.604547 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.604811 master-0 kubenswrapper[26053]: I0318 09:07:30.604782 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-config\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.604866 master-0 kubenswrapper[26053]: I0318 09:07:30.604840 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-client-ca\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.604909 master-0 kubenswrapper[26053]: I0318 09:07:30.604881 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77acf2d3-ac90-472b-9692-6c95fb90759b-serving-cert\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.604953 master-0 kubenswrapper[26053]: I0318 09:07:30.604919 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-proxy-ca-bundles\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.605020 master-0 kubenswrapper[26053]: I0318 09:07:30.604988 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.605088 master-0 kubenswrapper[26053]: I0318 09:07:30.605060 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57r4\" (UniqueName: \"kubernetes.io/projected/77acf2d3-ac90-472b-9692-6c95fb90759b-kube-api-access-v57r4\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.635368 master-0 kubenswrapper[26053]: I0318 09:07:30.635329 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp" event={"ID":"2a988b0d-7da8-4b03-9d05-996e2ce76fa7","Type":"ContainerDied","Data":"ffdb7e52cb99048623d40657a8a5d99f9698550b54514b26ebc2732b1e6425fe"}
Mar 18 09:07:30.635696 master-0 kubenswrapper[26053]: I0318 09:07:30.635380 26053 scope.go:117] "RemoveContainer" containerID="80767bbf40a264e6d24ae8b3b5969b77b4b535b99177b2ab99ec6cf8a14bdd49"
Mar 18 09:07:30.635696 master-0 kubenswrapper[26053]: I0318 09:07:30.635491 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"
Mar 18 09:07:30.639414 master-0 kubenswrapper[26053]: I0318 09:07:30.639335 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67" event={"ID":"a892bd16-5671-47dc-96dd-f82bbe71739f","Type":"ContainerDied","Data":"81f61629142f9cc55e73284204979ded145e427c2f958381159bcec88c05b8a0"}
Mar 18 09:07:30.639796 master-0 kubenswrapper[26053]: I0318 09:07:30.639610 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"
Mar 18 09:07:30.656743 master-0 kubenswrapper[26053]: I0318 09:07:30.656595 26053 scope.go:117] "RemoveContainer" containerID="1df5000054a78f0f67fb3c03509e277e54e86d09de4d9763061eb1ac14332924"
Mar 18 09:07:30.689915 master-0 kubenswrapper[26053]: I0318 09:07:30.686176 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"]
Mar 18 09:07:30.697624 master-0 kubenswrapper[26053]: I0318 09:07:30.695706 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f7d7c66b9-rmtxp"]
Mar 18 09:07:30.700500 master-0 kubenswrapper[26053]: I0318 09:07:30.700456 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"]
Mar 18 09:07:30.704292 master-0 kubenswrapper[26053]: I0318 09:07:30.704261 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-65f5f9559d-c7h67"]
Mar 18 09:07:30.706356 master-0 kubenswrapper[26053]: I0318 09:07:30.706299 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-config\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.706407 master-0 kubenswrapper[26053]: I0318 09:07:30.706364 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-client-ca\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.706536 master-0 kubenswrapper[26053]: I0318 09:07:30.706512 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77acf2d3-ac90-472b-9692-6c95fb90759b-serving-cert\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.706591 master-0 kubenswrapper[26053]: I0318 09:07:30.706553 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-proxy-ca-bundles\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.706633 master-0 kubenswrapper[26053]: I0318 09:07:30.706595 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.706666 master-0 kubenswrapper[26053]: I0318 09:07:30.706633 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57r4\" (UniqueName: \"kubernetes.io/projected/77acf2d3-ac90-472b-9692-6c95fb90759b-kube-api-access-v57r4\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.706728 master-0 kubenswrapper[26053]: I0318 09:07:30.706710 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.706769 master-0 kubenswrapper[26053]: I0318 09:07:30.706752 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdr5z\" (UniqueName: \"kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.706849 master-0 kubenswrapper[26053]: I0318 09:07:30.706833 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.707509 master-0 kubenswrapper[26053]: I0318 09:07:30.707469 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-client-ca\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.708661 master-0 kubenswrapper[26053]: I0318 09:07:30.708630 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-proxy-ca-bundles\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.708842 master-0 kubenswrapper[26053]: I0318 09:07:30.708812 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.708939 master-0 kubenswrapper[26053]: I0318 09:07:30.708901 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.711267 master-0 kubenswrapper[26053]: I0318 09:07:30.711235 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.711996 master-0 kubenswrapper[26053]: I0318 09:07:30.711672 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77acf2d3-ac90-472b-9692-6c95fb90759b-serving-cert\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.711996 master-0 kubenswrapper[26053]: I0318 09:07:30.711717 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77acf2d3-ac90-472b-9692-6c95fb90759b-config\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.730184 master-0 kubenswrapper[26053]: I0318 09:07:30.730140 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57r4\" (UniqueName: \"kubernetes.io/projected/77acf2d3-ac90-472b-9692-6c95fb90759b-kube-api-access-v57r4\") pod \"controller-manager-75bcc985b9-fhwrp\" (UID: \"77acf2d3-ac90-472b-9692-6c95fb90759b\") " pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:30.733423 master-0 kubenswrapper[26053]: I0318 09:07:30.733379 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdr5z\" (UniqueName: \"kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z\") pod \"route-controller-manager-85d54c98fb-6zb25\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.758668 master-0 kubenswrapper[26053]: I0318 09:07:30.757549 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a988b0d-7da8-4b03-9d05-996e2ce76fa7" path="/var/lib/kubelet/pods/2a988b0d-7da8-4b03-9d05-996e2ce76fa7/volumes"
Mar 18 09:07:30.758668 master-0 kubenswrapper[26053]: I0318 09:07:30.758408 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a892bd16-5671-47dc-96dd-f82bbe71739f" path="/var/lib/kubelet/pods/a892bd16-5671-47dc-96dd-f82bbe71739f/volumes"
Mar 18 09:07:30.789402 master-0 kubenswrapper[26053]: I0318 09:07:30.789346 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 18 09:07:30.794803 master-0 kubenswrapper[26053]: W0318 09:07:30.794748 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod315ae422_1357_4fce_a2f4_eb10aaaaae24.slice/crio-bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a WatchSource:0}: Error finding container bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a: Status 404 returned error can't find the container with id bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a
Mar 18 09:07:30.838891 master-0 kubenswrapper[26053]: I0318 09:07:30.838835 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:30.851296 master-0 kubenswrapper[26053]: I0318 09:07:30.851234 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"
Mar 18 09:07:31.250014 master-0 kubenswrapper[26053]: I0318 09:07:31.249938 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"]
Mar 18 09:07:31.340028 master-0 kubenswrapper[26053]: I0318 09:07:31.339966 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75bcc985b9-fhwrp"]
Mar 18 09:07:31.650005 master-0 kubenswrapper[26053]: I0318 09:07:31.649888 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp" event={"ID":"77acf2d3-ac90-472b-9692-6c95fb90759b","Type":"ContainerStarted","Data":"b1dfc42936c71a76248da7bcf08e355b89534413eaef94fba1e4789d521c3353"}
Mar 18 09:07:31.650005 master-0 kubenswrapper[26053]: I0318 09:07:31.649946 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp" event={"ID":"77acf2d3-ac90-472b-9692-6c95fb90759b","Type":"ContainerStarted","Data":"9ecaba49406ca6caae149922e98bd61e410f56d13d6ba2a5d5250c0f87fa40f8"}
Mar 18 09:07:31.650296 master-0
kubenswrapper[26053]: I0318 09:07:31.650271 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp" Mar 18 09:07:31.653519 master-0 kubenswrapper[26053]: I0318 09:07:31.651872 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" event={"ID":"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d","Type":"ContainerStarted","Data":"ae59817d98db1bfdc3a525437b7428fec317c4df2d42690ddf7ee12110fe3a1d"} Mar 18 09:07:31.653519 master-0 kubenswrapper[26053]: I0318 09:07:31.651927 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" event={"ID":"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d","Type":"ContainerStarted","Data":"f4e2724b0dc639bc47578abbe42f8d787fe7841f58dbb03eaaf4ec1f697626a5"} Mar 18 09:07:31.653519 master-0 kubenswrapper[26053]: I0318 09:07:31.652115 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" Mar 18 09:07:31.653809 master-0 kubenswrapper[26053]: I0318 09:07:31.653728 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"315ae422-1357-4fce-a2f4-eb10aaaaae24","Type":"ContainerStarted","Data":"67906f7bd6518ef838c9e8ed5bb8263d8a8999589f9fb8651a28ae883631860d"} Mar 18 09:07:31.653891 master-0 kubenswrapper[26053]: I0318 09:07:31.653815 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"315ae422-1357-4fce-a2f4-eb10aaaaae24","Type":"ContainerStarted","Data":"bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a"} Mar 18 09:07:31.658406 master-0 kubenswrapper[26053]: I0318 09:07:31.658366 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp" Mar 18 09:07:31.671910 master-0 kubenswrapper[26053]: I0318 09:07:31.671820 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75bcc985b9-fhwrp" podStartSLOduration=2.67178818 podStartE2EDuration="2.67178818s" podCreationTimestamp="2026-03-18 09:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:31.670800526 +0000 UTC m=+239.164151907" watchObservedRunningTime="2026-03-18 09:07:31.67178818 +0000 UTC m=+239.165139591" Mar 18 09:07:31.712354 master-0 kubenswrapper[26053]: I0318 09:07:31.712264 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.712243437 podStartE2EDuration="2.712243437s" podCreationTimestamp="2026-03-18 09:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:31.709202311 +0000 UTC m=+239.202553702" watchObservedRunningTime="2026-03-18 09:07:31.712243437 +0000 UTC m=+239.205594818" Mar 18 09:07:31.878052 master-0 kubenswrapper[26053]: I0318 09:07:31.878001 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" Mar 18 09:07:31.902368 master-0 kubenswrapper[26053]: I0318 09:07:31.902207 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" podStartSLOduration=2.90218582 podStartE2EDuration="2.90218582s" podCreationTimestamp="2026-03-18 09:07:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 
09:07:31.785036667 +0000 UTC m=+239.278388038" watchObservedRunningTime="2026-03-18 09:07:31.90218582 +0000 UTC m=+239.395537201" Mar 18 09:07:33.931456 master-0 kubenswrapper[26053]: I0318 09:07:33.931376 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"] Mar 18 09:07:33.932302 master-0 kubenswrapper[26053]: I0318 09:07:33.932265 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:33.994537 master-0 kubenswrapper[26053]: I0318 09:07:33.994492 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"] Mar 18 09:07:34.056910 master-0 kubenswrapper[26053]: I0318 09:07:34.056846 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.056910 master-0 kubenswrapper[26053]: I0318 09:07:34.056898 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.056910 master-0 kubenswrapper[26053]: I0318 09:07:34.056919 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h66qm\" (UniqueName: \"kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.057197 
master-0 kubenswrapper[26053]: I0318 09:07:34.056965 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.057197 master-0 kubenswrapper[26053]: I0318 09:07:34.056988 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.057197 master-0 kubenswrapper[26053]: I0318 09:07:34.057023 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.057197 master-0 kubenswrapper[26053]: I0318 09:07:34.057041 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158099 master-0 kubenswrapper[26053]: I0318 09:07:34.158050 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert\") pod 
\"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158099 master-0 kubenswrapper[26053]: I0318 09:07:34.158096 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158391 master-0 kubenswrapper[26053]: I0318 09:07:34.158257 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158489 master-0 kubenswrapper[26053]: I0318 09:07:34.158439 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158549 master-0 kubenswrapper[26053]: I0318 09:07:34.158498 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h66qm\" (UniqueName: \"kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158659 master-0 kubenswrapper[26053]: I0318 09:07:34.158639 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.158768 master-0 kubenswrapper[26053]: I0318 09:07:34.158748 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.159280 master-0 kubenswrapper[26053]: I0318 09:07:34.159249 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.159356 master-0 kubenswrapper[26053]: I0318 09:07:34.159321 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.159552 master-0 kubenswrapper[26053]: I0318 09:07:34.159509 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.159816 master-0 kubenswrapper[26053]: I0318 09:07:34.159788 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.162274 master-0 kubenswrapper[26053]: I0318 09:07:34.162235 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.162464 master-0 kubenswrapper[26053]: I0318 09:07:34.162441 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.177156 master-0 kubenswrapper[26053]: I0318 09:07:34.177116 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h66qm\" (UniqueName: \"kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm\") pod \"console-5d9cb85584-jfkbk\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.257142 master-0 kubenswrapper[26053]: I0318 09:07:34.257086 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:34.821037 master-0 kubenswrapper[26053]: I0318 09:07:34.820987 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"] Mar 18 09:07:35.700094 master-0 kubenswrapper[26053]: I0318 09:07:35.700028 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9cb85584-jfkbk" event={"ID":"09e381d6-17ca-4df3-a45f-22b95a1dc12f","Type":"ContainerStarted","Data":"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090"} Mar 18 09:07:35.700094 master-0 kubenswrapper[26053]: I0318 09:07:35.700104 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9cb85584-jfkbk" event={"ID":"09e381d6-17ca-4df3-a45f-22b95a1dc12f","Type":"ContainerStarted","Data":"6913c62288658645d8511a0bfad2d1c705dff63bb6ff0460e2744da85cf4ca17"} Mar 18 09:07:35.723067 master-0 kubenswrapper[26053]: I0318 09:07:35.722987 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d9cb85584-jfkbk" podStartSLOduration=2.722968953 podStartE2EDuration="2.722968953s" podCreationTimestamp="2026-03-18 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:35.718489941 +0000 UTC m=+243.211841322" watchObservedRunningTime="2026-03-18 09:07:35.722968953 +0000 UTC m=+243.216320334" Mar 18 09:07:44.257597 master-0 kubenswrapper[26053]: I0318 09:07:44.257491 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:44.257597 master-0 kubenswrapper[26053]: I0318 09:07:44.257598 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:44.264187 master-0 kubenswrapper[26053]: I0318 09:07:44.264136 26053 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:44.778317 master-0 kubenswrapper[26053]: I0318 09:07:44.778225 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:07:45.082246 master-0 kubenswrapper[26053]: I0318 09:07:45.082112 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:07:46.103467 master-0 kubenswrapper[26053]: I0318 09:07:46.103102 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 09:07:46.105628 master-0 kubenswrapper[26053]: I0318 09:07:46.104902 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.107156 master-0 kubenswrapper[26053]: I0318 09:07:46.107116 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-6mb4h" Mar 18 09:07:46.108889 master-0 kubenswrapper[26053]: I0318 09:07:46.108826 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 18 09:07:46.137639 master-0 kubenswrapper[26053]: I0318 09:07:46.134607 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 09:07:46.274036 master-0 kubenswrapper[26053]: I0318 09:07:46.273937 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.274383 master-0 kubenswrapper[26053]: I0318 09:07:46.274117 26053 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.274383 master-0 kubenswrapper[26053]: I0318 09:07:46.274200 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.376670 master-0 kubenswrapper[26053]: I0318 09:07:46.376447 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.376670 master-0 kubenswrapper[26053]: I0318 09:07:46.376558 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.376670 master-0 kubenswrapper[26053]: I0318 09:07:46.376643 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.377106 master-0 kubenswrapper[26053]: I0318 09:07:46.376906 26053 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.377106 master-0 kubenswrapper[26053]: I0318 09:07:46.377044 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.409409 master-0 kubenswrapper[26053]: I0318 09:07:46.409356 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access\") pod \"installer-5-master-0\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.453874 master-0 kubenswrapper[26053]: I0318 09:07:46.453794 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:07:46.926940 master-0 kubenswrapper[26053]: I0318 09:07:46.926831 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 09:07:46.929727 master-0 kubenswrapper[26053]: W0318 09:07:46.929603 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4c36ffe6_e550_400e_9cf5_883d543fbb05.slice/crio-048a506e1de498dee7fc7b995c202c0841d636d426b490b1c873c2fc0cb74148 WatchSource:0}: Error finding container 048a506e1de498dee7fc7b995c202c0841d636d426b490b1c873c2fc0cb74148: Status 404 returned error can't find the container with id 048a506e1de498dee7fc7b995c202c0841d636d426b490b1c873c2fc0cb74148 Mar 18 09:07:47.793759 master-0 kubenswrapper[26053]: I0318 09:07:47.793677 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"4c36ffe6-e550-400e-9cf5-883d543fbb05","Type":"ContainerStarted","Data":"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20"} Mar 18 09:07:47.793759 master-0 kubenswrapper[26053]: I0318 09:07:47.793742 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"4c36ffe6-e550-400e-9cf5-883d543fbb05","Type":"ContainerStarted","Data":"048a506e1de498dee7fc7b995c202c0841d636d426b490b1c873c2fc0cb74148"} Mar 18 09:07:47.817334 master-0 kubenswrapper[26053]: I0318 09:07:47.817234 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.817212573 podStartE2EDuration="1.817212573s" podCreationTimestamp="2026-03-18 09:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:47.815402358 +0000 UTC m=+255.308753749" watchObservedRunningTime="2026-03-18 
09:07:47.817212573 +0000 UTC m=+255.310563964" Mar 18 09:07:49.247206 master-0 kubenswrapper[26053]: I0318 09:07:49.247140 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"] Mar 18 09:07:49.247826 master-0 kubenswrapper[26053]: I0318 09:07:49.247435 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" podUID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" containerName="route-controller-manager" containerID="cri-o://ae59817d98db1bfdc3a525437b7428fec317c4df2d42690ddf7ee12110fe3a1d" gracePeriod=30 Mar 18 09:07:49.810001 master-0 kubenswrapper[26053]: I0318 09:07:49.809904 26053 generic.go:334] "Generic (PLEG): container finished" podID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" containerID="ae59817d98db1bfdc3a525437b7428fec317c4df2d42690ddf7ee12110fe3a1d" exitCode=0 Mar 18 09:07:49.810001 master-0 kubenswrapper[26053]: I0318 09:07:49.809990 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" event={"ID":"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d","Type":"ContainerDied","Data":"ae59817d98db1bfdc3a525437b7428fec317c4df2d42690ddf7ee12110fe3a1d"} Mar 18 09:07:49.810303 master-0 kubenswrapper[26053]: I0318 09:07:49.810056 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" event={"ID":"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d","Type":"ContainerDied","Data":"f4e2724b0dc639bc47578abbe42f8d787fe7841f58dbb03eaaf4ec1f697626a5"} Mar 18 09:07:49.810303 master-0 kubenswrapper[26053]: I0318 09:07:49.810075 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4e2724b0dc639bc47578abbe42f8d787fe7841f58dbb03eaaf4ec1f697626a5" Mar 18 09:07:49.848306 master-0 kubenswrapper[26053]: I0318 09:07:49.848273 26053 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25" Mar 18 09:07:49.932512 master-0 kubenswrapper[26053]: I0318 09:07:49.932464 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config\") pod \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " Mar 18 09:07:49.932771 master-0 kubenswrapper[26053]: I0318 09:07:49.932646 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca\") pod \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " Mar 18 09:07:49.932771 master-0 kubenswrapper[26053]: I0318 09:07:49.932703 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdr5z\" (UniqueName: \"kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z\") pod \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " Mar 18 09:07:49.932771 master-0 kubenswrapper[26053]: I0318 09:07:49.932722 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert\") pod \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\" (UID: \"c882a85d-94f0-4aaf-9ebb-3f5a1684e08d\") " Mar 18 09:07:49.934470 master-0 kubenswrapper[26053]: I0318 09:07:49.934412 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" (UID: "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:49.934999 master-0 kubenswrapper[26053]: I0318 09:07:49.934934 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config" (OuterVolumeSpecName: "config") pod "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" (UID: "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 18 09:07:49.938253 master-0 kubenswrapper[26053]: I0318 09:07:49.938206 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z" (OuterVolumeSpecName: "kube-api-access-fdr5z") pod "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" (UID: "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d"). InnerVolumeSpecName "kube-api-access-fdr5z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:49.943198 master-0 kubenswrapper[26053]: I0318 09:07:49.942610 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" (UID: "c882a85d-94f0-4aaf-9ebb-3f5a1684e08d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 18 09:07:50.034751 master-0 kubenswrapper[26053]: I0318 09:07:50.034679 26053 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.034751 master-0 kubenswrapper[26053]: I0318 09:07:50.034740 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdr5z\" (UniqueName: \"kubernetes.io/projected/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-kube-api-access-fdr5z\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.034751 master-0 kubenswrapper[26053]: I0318 09:07:50.034752 26053 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.034751 master-0 kubenswrapper[26053]: I0318 09:07:50.034761 26053 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d-config\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.353083 master-0 kubenswrapper[26053]: I0318 09:07:50.352992 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:07:50.353668 master-0 kubenswrapper[26053]: I0318 09:07:50.353461 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="cluster-policy-controller" containerID="cri-o://9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba" gracePeriod=30
Mar 18 09:07:50.353668 master-0 kubenswrapper[26053]: I0318 09:07:50.353612 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager" containerID="cri-o://711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43" gracePeriod=30
Mar 18 09:07:50.353799 master-0 kubenswrapper[26053]: I0318 09:07:50.353694 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0" gracePeriod=30
Mar 18 09:07:50.353799 master-0 kubenswrapper[26053]: I0318 09:07:50.353776 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427" gracePeriod=30
Mar 18 09:07:50.356201 master-0 kubenswrapper[26053]: I0318 09:07:50.356163 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 18 09:07:50.356692 master-0 kubenswrapper[26053]: E0318 09:07:50.356659 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.356692 master-0 kubenswrapper[26053]: I0318 09:07:50.356690 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: E0318 09:07:50.356715 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: I0318 09:07:50.356726 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: E0318 09:07:50.356775 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" containerName="route-controller-manager"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: I0318 09:07:50.356785 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" containerName="route-controller-manager"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: E0318 09:07:50.356796 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:07:50.356811 master-0 kubenswrapper[26053]: I0318 09:07:50.356806 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:07:50.357040 master-0 kubenswrapper[26053]: E0318 09:07:50.356858 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="cluster-policy-controller"
Mar 18 09:07:50.357040 master-0 kubenswrapper[26053]: I0318 09:07:50.356870 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="cluster-policy-controller"
Mar 18 09:07:50.357040 master-0 kubenswrapper[26053]: E0318 09:07:50.356883 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:07:50.357040 master-0 kubenswrapper[26053]: I0318 09:07:50.356891 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357075 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" containerName="route-controller-manager"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357095 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="cluster-policy-controller"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357130 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357148 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-recovery-controller"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357168 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager"
Mar 18 09:07:50.357200 master-0 kubenswrapper[26053]: I0318 09:07:50.357189 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902db65fe16fd26bf5e57c38292ff3f" containerName="kube-controller-manager-cert-syncer"
Mar 18 09:07:50.442390 master-0 kubenswrapper[26053]: I0318 09:07:50.442310 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.442541 master-0 kubenswrapper[26053]: I0318 09:07:50.442435 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.499954 master-0 kubenswrapper[26053]: I0318 09:07:50.499390 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"]
Mar 18 09:07:50.500691 master-0 kubenswrapper[26053]: I0318 09:07:50.500597 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.505100 master-0 kubenswrapper[26053]: I0318 09:07:50.504505 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="2902db65fe16fd26bf5e57c38292ff3f" podUID="60c2ba061fb7c3edad3900526541ee3c"
Mar 18 09:07:50.510195 master-0 kubenswrapper[26053]: I0318 09:07:50.510142 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"]
Mar 18 09:07:50.543846 master-0 kubenswrapper[26053]: I0318 09:07:50.543807 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.544097 master-0 kubenswrapper[26053]: I0318 09:07:50.543906 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.544097 master-0 kubenswrapper[26053]: I0318 09:07:50.543922 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.544097 master-0 kubenswrapper[26053]: I0318 09:07:50.543996 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.645440 master-0 kubenswrapper[26053]: I0318 09:07:50.645222 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71883781-b869-466d-88f7-dca17ef336e3-serving-cert\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.645440 master-0 kubenswrapper[26053]: I0318 09:07:50.645407 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-config\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.645879 master-0 kubenswrapper[26053]: I0318 09:07:50.645656 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-client-ca\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.646076 master-0 kubenswrapper[26053]: I0318 09:07:50.645970 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvnp6\" (UniqueName: \"kubernetes.io/projected/71883781-b869-466d-88f7-dca17ef336e3-kube-api-access-gvnp6\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.675822 master-0 kubenswrapper[26053]: I0318 09:07:50.675762 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager-cert-syncer/0.log"
Mar 18 09:07:50.676981 master-0 kubenswrapper[26053]: I0318 09:07:50.676952 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager/0.log"
Mar 18 09:07:50.676981 master-0 kubenswrapper[26053]: I0318 09:07:50.677044 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.680623 master-0 kubenswrapper[26053]: I0318 09:07:50.680520 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="2902db65fe16fd26bf5e57c38292ff3f" podUID="60c2ba061fb7c3edad3900526541ee3c"
Mar 18 09:07:50.747785 master-0 kubenswrapper[26053]: I0318 09:07:50.747702 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71883781-b869-466d-88f7-dca17ef336e3-serving-cert\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.748101 master-0 kubenswrapper[26053]: I0318 09:07:50.747830 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-config\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.748101 master-0 kubenswrapper[26053]: I0318 09:07:50.747867 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-client-ca\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.748101 master-0 kubenswrapper[26053]: I0318 09:07:50.747916 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvnp6\" (UniqueName: \"kubernetes.io/projected/71883781-b869-466d-88f7-dca17ef336e3-kube-api-access-gvnp6\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.750004 master-0 kubenswrapper[26053]: I0318 09:07:50.749592 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-config\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.750152 master-0 kubenswrapper[26053]: I0318 09:07:50.750046 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/71883781-b869-466d-88f7-dca17ef336e3-client-ca\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.752376 master-0 kubenswrapper[26053]: I0318 09:07:50.752319 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71883781-b869-466d-88f7-dca17ef336e3-serving-cert\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.771089 master-0 kubenswrapper[26053]: I0318 09:07:50.771017 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvnp6\" (UniqueName: \"kubernetes.io/projected/71883781-b869-466d-88f7-dca17ef336e3-kube-api-access-gvnp6\") pod \"route-controller-manager-6d789b8c64-mz9nz\" (UID: \"71883781-b869-466d-88f7-dca17ef336e3\") " pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.823148 master-0 kubenswrapper[26053]: I0318 09:07:50.823071 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager-cert-syncer/0.log"
Mar 18 09:07:50.824934 master-0 kubenswrapper[26053]: I0318 09:07:50.824871 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_2902db65fe16fd26bf5e57c38292ff3f/kube-controller-manager/0.log"
Mar 18 09:07:50.825077 master-0 kubenswrapper[26053]: I0318 09:07:50.824975 26053 generic.go:334] "Generic (PLEG): container finished" podID="2902db65fe16fd26bf5e57c38292ff3f" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43" exitCode=0
Mar 18 09:07:50.825077 master-0 kubenswrapper[26053]: I0318 09:07:50.825008 26053 generic.go:334] "Generic (PLEG): container finished" podID="2902db65fe16fd26bf5e57c38292ff3f" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0" exitCode=0
Mar 18 09:07:50.825077 master-0 kubenswrapper[26053]: I0318 09:07:50.825027 26053 generic.go:334] "Generic (PLEG): container finished" podID="2902db65fe16fd26bf5e57c38292ff3f" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427" exitCode=2
Mar 18 09:07:50.825077 master-0 kubenswrapper[26053]: I0318 09:07:50.825041 26053 generic.go:334] "Generic (PLEG): container finished" podID="2902db65fe16fd26bf5e57c38292ff3f" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba" exitCode=0
Mar 18 09:07:50.825701 master-0 kubenswrapper[26053]: I0318 09:07:50.825112 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:07:50.825833 master-0 kubenswrapper[26053]: I0318 09:07:50.825092 26053 scope.go:117] "RemoveContainer" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.827875 master-0 kubenswrapper[26053]: I0318 09:07:50.827823 26053 generic.go:334] "Generic (PLEG): container finished" podID="7dcc6db5-f20e-431f-9f0b-818bd3830f41" containerID="4a864a3ec5e5c79a4987e8bddbd49b8483d9a4bcb65117ff0512bf9b08b6a111" exitCode=0
Mar 18 09:07:50.828018 master-0 kubenswrapper[26053]: I0318 09:07:50.827870 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"7dcc6db5-f20e-431f-9f0b-818bd3830f41","Type":"ContainerDied","Data":"4a864a3ec5e5c79a4987e8bddbd49b8483d9a4bcb65117ff0512bf9b08b6a111"}
Mar 18 09:07:50.828018 master-0 kubenswrapper[26053]: I0318 09:07:50.827978 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"
Mar 18 09:07:50.831322 master-0 kubenswrapper[26053]: I0318 09:07:50.830975 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="2902db65fe16fd26bf5e57c38292ff3f" podUID="60c2ba061fb7c3edad3900526541ee3c"
Mar 18 09:07:50.849650 master-0 kubenswrapper[26053]: I0318 09:07:50.849554 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir\") pod \"2902db65fe16fd26bf5e57c38292ff3f\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") "
Mar 18 09:07:50.849650 master-0 kubenswrapper[26053]: I0318 09:07:50.849664 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir\") pod \"2902db65fe16fd26bf5e57c38292ff3f\" (UID: \"2902db65fe16fd26bf5e57c38292ff3f\") "
Mar 18 09:07:50.850061 master-0 kubenswrapper[26053]: I0318 09:07:50.849948 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "2902db65fe16fd26bf5e57c38292ff3f" (UID: "2902db65fe16fd26bf5e57c38292ff3f"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:50.850061 master-0 kubenswrapper[26053]: I0318 09:07:50.850024 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "2902db65fe16fd26bf5e57c38292ff3f" (UID: "2902db65fe16fd26bf5e57c38292ff3f"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:50.851176 master-0 kubenswrapper[26053]: I0318 09:07:50.851127 26053 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.851305 master-0 kubenswrapper[26053]: I0318 09:07:50.851195 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2902db65fe16fd26bf5e57c38292ff3f-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:50.853976 master-0 kubenswrapper[26053]: I0318 09:07:50.853923 26053 scope.go:117] "RemoveContainer" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.885677 master-0 kubenswrapper[26053]: I0318 09:07:50.885584 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"]
Mar 18 09:07:50.888385 master-0 kubenswrapper[26053]: I0318 09:07:50.888327 26053 scope.go:117] "RemoveContainer" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.892110 master-0 kubenswrapper[26053]: I0318 09:07:50.892055 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85d54c98fb-6zb25"]
Mar 18 09:07:50.919087 master-0 kubenswrapper[26053]: I0318 09:07:50.918988 26053 scope.go:117] "RemoveContainer" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.942864 master-0 kubenswrapper[26053]: I0318 09:07:50.942782 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.970701 master-0 kubenswrapper[26053]: I0318 09:07:50.970605 26053 scope.go:117] "RemoveContainer" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.971095 master-0 kubenswrapper[26053]: I0318 09:07:50.971042 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:50.971744 master-0 kubenswrapper[26053]: E0318 09:07:50.971687 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": container with ID starting with 711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43 not found: ID does not exist" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.971939 master-0 kubenswrapper[26053]: I0318 09:07:50.971745 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"} err="failed to get container status \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": rpc error: code = NotFound desc = could not find container \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": container with ID starting with 711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43 not found: ID does not exist"
Mar 18 09:07:50.971939 master-0 kubenswrapper[26053]: I0318 09:07:50.971785 26053 scope.go:117] "RemoveContainer" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.972750 master-0 kubenswrapper[26053]: E0318 09:07:50.972699 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": container with ID starting with 3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0 not found: ID does not exist" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.972886 master-0 kubenswrapper[26053]: I0318 09:07:50.972770 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"} err="failed to get container status \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": rpc error: code = NotFound desc = could not find container \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": container with ID starting with 3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0 not found: ID does not exist"
Mar 18 09:07:50.972886 master-0 kubenswrapper[26053]: I0318 09:07:50.972823 26053 scope.go:117] "RemoveContainer" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.973193 master-0 kubenswrapper[26053]: E0318 09:07:50.973150 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": container with ID starting with 1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427 not found: ID does not exist" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.973193 master-0 kubenswrapper[26053]: I0318 09:07:50.973180 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"} err="failed to get container status \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": rpc error: code = NotFound desc = could not find container \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": container with ID starting with 1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427 not found: ID does not exist"
Mar 18 09:07:50.973395 master-0 kubenswrapper[26053]: I0318 09:07:50.973205 26053 scope.go:117] "RemoveContainer" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.973716 master-0 kubenswrapper[26053]: E0318 09:07:50.973515 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": container with ID starting with 9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba not found: ID does not exist" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.973716 master-0 kubenswrapper[26053]: I0318 09:07:50.973632 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"} err="failed to get container status \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": rpc error: code = NotFound desc = could not find container \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": container with ID starting with 9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba not found: ID does not exist"
Mar 18 09:07:50.973716 master-0 kubenswrapper[26053]: I0318 09:07:50.973693 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.974125 master-0 kubenswrapper[26053]: E0318 09:07:50.974081 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": container with ID starting with c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc not found: ID does not exist" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.974234 master-0 kubenswrapper[26053]: I0318 09:07:50.974115 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"} err="failed to get container status \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": rpc error: code = NotFound desc = could not find container \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": container with ID starting with c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc not found: ID does not exist"
Mar 18 09:07:50.974234 master-0 kubenswrapper[26053]: I0318 09:07:50.974147 26053 scope.go:117] "RemoveContainer" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.974623 master-0 kubenswrapper[26053]: I0318 09:07:50.974499 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"} err="failed to get container status \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": rpc error: code = NotFound desc = could not find container \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": container with ID starting with 711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43 not found: ID does not exist"
Mar 18 09:07:50.974765 master-0 kubenswrapper[26053]: I0318 09:07:50.974619 26053 scope.go:117] "RemoveContainer" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.975031 master-0 kubenswrapper[26053]: I0318 09:07:50.974988 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"} err="failed to get container status \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": rpc error: code = NotFound desc = could not find container \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": container with ID starting with 3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0 not found: ID does not exist"
Mar 18 09:07:50.975031 master-0 kubenswrapper[26053]: I0318 09:07:50.975016 26053 scope.go:117] "RemoveContainer" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.975371 master-0 kubenswrapper[26053]: I0318 09:07:50.975300 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"} err="failed to get container status \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": rpc error: code = NotFound desc = could not find container \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": container with ID starting with 1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427 not found: ID does not exist"
Mar 18 09:07:50.975371 master-0 kubenswrapper[26053]: I0318 09:07:50.975351 26053 scope.go:117] "RemoveContainer" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.975767 master-0 kubenswrapper[26053]: I0318 09:07:50.975628 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"} err="failed to get container status \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": rpc error: code = NotFound desc = could not find container \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": container with ID starting with 9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba not found: ID does not exist"
Mar 18 09:07:50.975767 master-0 kubenswrapper[26053]: I0318 09:07:50.975652 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.975976 master-0 kubenswrapper[26053]: I0318 09:07:50.975905 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"} err="failed to get container status \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": rpc error: code = NotFound desc = could not find container \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": container with ID starting with c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc not found: ID does not exist"
Mar 18 09:07:50.975976 master-0 kubenswrapper[26053]: I0318 09:07:50.975946 26053 scope.go:117] "RemoveContainer" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.976237 master-0 kubenswrapper[26053]: I0318 09:07:50.976188 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"} err="failed to get container status \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": rpc error: code = NotFound desc = could not find container \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": container with ID starting with 711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43 not found: ID does not exist"
Mar 18 09:07:50.976237 master-0 kubenswrapper[26053]: I0318 09:07:50.976223 26053 scope.go:117] "RemoveContainer" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.976553 master-0 kubenswrapper[26053]: I0318 09:07:50.976480 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"} err="failed to get container status \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": rpc error: code = NotFound desc = could not find container \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": container with ID starting with 3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0 not found: ID does not exist"
Mar 18 09:07:50.976731 master-0 kubenswrapper[26053]: I0318 09:07:50.976523 26053 scope.go:117] "RemoveContainer" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.977143 master-0 kubenswrapper[26053]: I0318 09:07:50.977080 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"} err="failed to get container status \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": rpc error: code = NotFound desc = could not find container \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": container with ID starting with 1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427 not found: ID does not exist"
Mar 18 09:07:50.977143 master-0 kubenswrapper[26053]: I0318 09:07:50.977133 26053 scope.go:117] "RemoveContainer" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.977476 master-0 kubenswrapper[26053]: I0318 09:07:50.977424 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"} err="failed to get container status \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": rpc error: code = NotFound desc = could not find container \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": container with ID starting with 9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba not found: ID does not exist"
Mar 18 09:07:50.977476 master-0 kubenswrapper[26053]: I0318 09:07:50.977466 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.977768 master-0 kubenswrapper[26053]: I0318 09:07:50.977731 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"} err="failed to get container status \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": rpc error: code = NotFound desc = could not find container \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": container with ID starting with c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc not found: ID does not exist"
Mar 18 09:07:50.977768 master-0 kubenswrapper[26053]: I0318 09:07:50.977760 26053 scope.go:117] "RemoveContainer" containerID="711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"
Mar 18 09:07:50.977980 master-0 kubenswrapper[26053]: I0318 09:07:50.977946 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43"} err="failed to get container status \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": rpc error: code = NotFound desc = could not find container \"711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43\": container with ID starting with 711f34ef0b89499647a665414237ec5e74e833aca95499d6d590aeb83921ae43 not found: ID does not exist"
Mar 18 09:07:50.977980 master-0 kubenswrapper[26053]: I0318 09:07:50.977972 26053 scope.go:117] "RemoveContainer" containerID="3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"
Mar 18 09:07:50.978271 master-0 kubenswrapper[26053]: I0318 09:07:50.978223 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0"} err="failed to get container status \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": rpc error: code = NotFound desc = could not find container \"3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0\": container with ID starting with 3f928a7e48426102b58c53b94e18fcb61a93d5ccd3e1472c95bbec366dd76ae0 not found: ID does not exist"
Mar 18 09:07:50.978271 master-0 kubenswrapper[26053]: I0318 09:07:50.978264 26053 scope.go:117] "RemoveContainer" containerID="1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"
Mar 18 09:07:50.978543 master-0 kubenswrapper[26053]: I0318 09:07:50.978505 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427"} err="failed to get container status \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": rpc error: code = NotFound desc = could not find container \"1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427\": container with ID starting with 1fe3e041232895ad33655a401eee4678f3d44d0850e51263631ba2016210c427 not found: ID does not exist"
Mar 18 09:07:50.978543 master-0 kubenswrapper[26053]: I0318 09:07:50.978534 26053 scope.go:117] "RemoveContainer" containerID="9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"
Mar 18 09:07:50.978862 master-0 kubenswrapper[26053]: I0318 09:07:50.978808 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba"} err="failed to get container status \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": rpc error: code = NotFound desc = could not find container \"9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba\": container with ID starting with 9c5833e4d47a3451e6b74487810ae63c27cfb2cf69496fd4561b5ce16897f8ba not found: ID does not exist"
Mar 18 09:07:50.978862 master-0 kubenswrapper[26053]: I0318 09:07:50.978856 26053 scope.go:117] "RemoveContainer" containerID="c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"
Mar 18 09:07:50.979116 master-0 kubenswrapper[26053]: I0318 09:07:50.979084 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc"} err="failed to get container status \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": rpc error: code = NotFound desc = could not find container \"c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc\": container with ID starting with c56798a0a0cb7c06043eeedb349cb026619415a6cb00d1c982b854d0bc2a44dc not found: ID does not exist"
Mar 18 09:07:51.110152 master-0 kubenswrapper[26053]: I0318 09:07:51.110071 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"]
Mar 18 09:07:51.110416 master-0 kubenswrapper[26053]: I0318 09:07:51.110346 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-5-master-0" podUID="4c36ffe6-e550-400e-9cf5-883d543fbb05" containerName="installer" containerID="cri-o://bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20" gracePeriod=30
Mar 18 09:07:51.157785 master-0 kubenswrapper[26053]: I0318 09:07:51.153372 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="2902db65fe16fd26bf5e57c38292ff3f" podUID="60c2ba061fb7c3edad3900526541ee3c"
Mar 18 09:07:51.451665 master-0 kubenswrapper[26053]: I0318 09:07:51.451561 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"]
Mar 18 09:07:51.456506 master-0 kubenswrapper[26053]: W0318 09:07:51.456456 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71883781_b869_466d_88f7_dca17ef336e3.slice/crio-15ff593cc43babc8e59b5125bd76b0ccc4b3810556606b2fb39fd66cd6a44efb WatchSource:0}: Error finding container 15ff593cc43babc8e59b5125bd76b0ccc4b3810556606b2fb39fd66cd6a44efb: Status 404 returned error can't find the container with id 15ff593cc43babc8e59b5125bd76b0ccc4b3810556606b2fb39fd66cd6a44efb
Mar 18 09:07:51.839138 master-0 kubenswrapper[26053]: I0318 09:07:51.839011 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz" event={"ID":"71883781-b869-466d-88f7-dca17ef336e3","Type":"ContainerStarted","Data":"85a146b2ca15c897c21733feb2aa7a95d2a48cb5f2a1cfee18773489da2e2dfb"}
Mar 18 09:07:51.839138 master-0 kubenswrapper[26053]: I0318 09:07:51.839117 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz" event={"ID":"71883781-b869-466d-88f7-dca17ef336e3","Type":"ContainerStarted","Data":"15ff593cc43babc8e59b5125bd76b0ccc4b3810556606b2fb39fd66cd6a44efb"}
Mar 18 09:07:51.868214 master-0 kubenswrapper[26053]: I0318 09:07:51.868099 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz" podStartSLOduration=2.868078067 podStartE2EDuration="2.868078067s" podCreationTimestamp="2026-03-18 09:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:51.865864642 +0000 UTC m=+259.359216053" watchObservedRunningTime="2026-03-18 09:07:51.868078067 +0000 UTC m=+259.361429488"
Mar 18 09:07:52.246402 master-0 kubenswrapper[26053]: I0318 09:07:52.246356 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 18 09:07:52.397911 master-0 kubenswrapper[26053]: I0318 09:07:52.397827 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access\") pod \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") "
Mar 18 09:07:52.398185 master-0 kubenswrapper[26053]: I0318 09:07:52.397938 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir\") pod \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") "
Mar 18 09:07:52.398185 master-0 kubenswrapper[26053]: I0318 09:07:52.398011 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock\") pod \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\" (UID: \"7dcc6db5-f20e-431f-9f0b-818bd3830f41\") "
Mar 18 09:07:52.398285 master-0 kubenswrapper[26053]: I0318 09:07:52.398201 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7dcc6db5-f20e-431f-9f0b-818bd3830f41" (UID: "7dcc6db5-f20e-431f-9f0b-818bd3830f41"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:52.398463 master-0 kubenswrapper[26053]: I0318 09:07:52.398419 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock" (OuterVolumeSpecName: "var-lock") pod "7dcc6db5-f20e-431f-9f0b-818bd3830f41" (UID: "7dcc6db5-f20e-431f-9f0b-818bd3830f41"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:07:52.398857 master-0 kubenswrapper[26053]: I0318 09:07:52.398811 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:52.398951 master-0 kubenswrapper[26053]: I0318 09:07:52.398866 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7dcc6db5-f20e-431f-9f0b-818bd3830f41-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:52.401086 master-0 kubenswrapper[26053]: I0318 09:07:52.401025 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7dcc6db5-f20e-431f-9f0b-818bd3830f41" (UID: "7dcc6db5-f20e-431f-9f0b-818bd3830f41"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 18 09:07:52.500108 master-0 kubenswrapper[26053]: I0318 09:07:52.500029 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7dcc6db5-f20e-431f-9f0b-818bd3830f41-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 18 09:07:52.745852 master-0 kubenswrapper[26053]: I0318 09:07:52.745743 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902db65fe16fd26bf5e57c38292ff3f" path="/var/lib/kubelet/pods/2902db65fe16fd26bf5e57c38292ff3f/volumes"
Mar 18 09:07:52.747793 master-0 kubenswrapper[26053]: I0318 09:07:52.747739 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c882a85d-94f0-4aaf-9ebb-3f5a1684e08d" path="/var/lib/kubelet/pods/c882a85d-94f0-4aaf-9ebb-3f5a1684e08d/volumes"
Mar 18 09:07:52.848212 master-0 kubenswrapper[26053]: I0318 09:07:52.848136 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0"
Mar 18 09:07:52.848544 master-0 kubenswrapper[26053]: I0318 09:07:52.848134 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"7dcc6db5-f20e-431f-9f0b-818bd3830f41","Type":"ContainerDied","Data":"36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98"}
Mar 18 09:07:52.848544 master-0 kubenswrapper[26053]: I0318 09:07:52.848291 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36b456f37b2d26d7504619d17c9a22bdabf2d287e74babcf44fa7fce2a0bee98"
Mar 18 09:07:52.848544 master-0 kubenswrapper[26053]: I0318 09:07:52.848495 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:52.854279 master-0 kubenswrapper[26053]: I0318 09:07:52.854215 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d789b8c64-mz9nz"
Mar 18 09:07:53.819831 master-0 kubenswrapper[26053]: I0318 09:07:53.819749 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:07:53.825149 master-0 kubenswrapper[26053]: I0318 09:07:53.825100 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/c2e9661c-0359-460f-a97d-a06f2b572d23-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-tfl88\" (UID: \"c2e9661c-0359-460f-a97d-a06f2b572d23\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:07:54.015962 master-0 kubenswrapper[26053]: I0318 09:07:54.015859 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"
Mar 18 09:07:54.309668 master-0 kubenswrapper[26053]: I0318 09:07:54.303786 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 09:07:54.309668 master-0 kubenswrapper[26053]: E0318 09:07:54.304338 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dcc6db5-f20e-431f-9f0b-818bd3830f41" containerName="installer"
Mar 18 09:07:54.309668 master-0 kubenswrapper[26053]: I0318 09:07:54.304364 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dcc6db5-f20e-431f-9f0b-818bd3830f41" containerName="installer"
Mar 18 09:07:54.309668 master-0 kubenswrapper[26053]: I0318 09:07:54.304627 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dcc6db5-f20e-431f-9f0b-818bd3830f41" containerName="installer"
Mar 18 09:07:54.309668 master-0 kubenswrapper[26053]: I0318 09:07:54.305411 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.313807 master-0 kubenswrapper[26053]: I0318 09:07:54.313735 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 09:07:54.433606 master-0 kubenswrapper[26053]: I0318 09:07:54.433419 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.433901 master-0 kubenswrapper[26053]: I0318 09:07:54.433736 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.433901 master-0 kubenswrapper[26053]: I0318 09:07:54.433794 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.534758 master-0 kubenswrapper[26053]: I0318 09:07:54.534646 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.534758 master-0 kubenswrapper[26053]: I0318 09:07:54.534718 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.535175 master-0 kubenswrapper[26053]: I0318 09:07:54.534814 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.535175 master-0 kubenswrapper[26053]: I0318 09:07:54.534840 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.535175 master-0 kubenswrapper[26053]: I0318 09:07:54.534874 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.583022 master-0 kubenswrapper[26053]: I0318 09:07:54.582941 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-tfl88"]
Mar 18 09:07:54.591275 master-0 kubenswrapper[26053]: I0318 09:07:54.591220 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access\") pod \"installer-6-master-0\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.663271 master-0 kubenswrapper[26053]: I0318 09:07:54.663208 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Mar 18 09:07:54.871900 master-0 kubenswrapper[26053]: I0318 09:07:54.871790 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" event={"ID":"c2e9661c-0359-460f-a97d-a06f2b572d23","Type":"ContainerStarted","Data":"5ee639592449607f7f3f138bc2bee0d7830a6c229fb1c00b494a57c94ea74f2c"}
Mar 18 09:07:55.067451 master-0 kubenswrapper[26053]: I0318 09:07:55.066591 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"]
Mar 18 09:07:55.881935 master-0 kubenswrapper[26053]: I0318 09:07:55.881838 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"1723c159-3187-46be-89bb-a529ca0c54db","Type":"ContainerStarted","Data":"32724c056de4657bf1580f9b9722f5f0804388890f96ca693367772644921120"}
Mar 18 09:07:55.881935 master-0 kubenswrapper[26053]: I0318 09:07:55.881905 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"1723c159-3187-46be-89bb-a529ca0c54db","Type":"ContainerStarted","Data":"065e6e9a9bf3a7a541110af4dfc16ea75dfe81736047d4d0a53cd3fe069e12df"}
Mar 18 09:07:55.924648 master-0 kubenswrapper[26053]: I0318 09:07:55.924530 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=1.924503079 podStartE2EDuration="1.924503079s" podCreationTimestamp="2026-03-18 09:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:07:55.91450442 +0000 UTC m=+263.407855881" watchObservedRunningTime="2026-03-18 09:07:55.924503079 +0000 UTC m=+263.417854500"
Mar 18 09:07:56.895911 master-0 kubenswrapper[26053]: I0318 09:07:56.895806 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" event={"ID":"c2e9661c-0359-460f-a97d-a06f2b572d23","Type":"ContainerStarted","Data":"22244507a41c58b0ce04e29d344f694428c3a8c3639373c6634996329c2e061d"}
Mar 18 09:07:56.925278 master-0 kubenswrapper[26053]: I0318 09:07:56.925142 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-tfl88" podStartSLOduration=66.116062925 podStartE2EDuration="1m7.925101211s" podCreationTimestamp="2026-03-18 09:06:49 +0000 UTC" firstStartedPulling="2026-03-18 09:07:54.582713212 +0000 UTC m=+262.076064593" lastFinishedPulling="2026-03-18 09:07:56.391751498 +0000 UTC m=+263.885102879" observedRunningTime="2026-03-18 09:07:56.917214905 +0000 UTC m=+264.410566326" watchObservedRunningTime="2026-03-18 09:07:56.925101211 +0000 UTC m=+264.418452622"
Mar 18 09:08:02.050238 master-0 kubenswrapper[26053]: I0318 09:08:02.050125 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 09:08:02.051358 master-0 kubenswrapper[26053]: I0318 09:08:02.050556 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler" containerID="cri-o://fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" gracePeriod=30
Mar 18 09:08:02.051358 master-0 kubenswrapper[26053]: I0318 09:08:02.050859 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller" containerID="cri-o://0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" gracePeriod=30
Mar 18 09:08:02.051358 master-0 kubenswrapper[26053]: I0318 09:08:02.050916 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer" containerID="cri-o://3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" gracePeriod=30
Mar 18 09:08:02.053845 master-0 kubenswrapper[26053]: I0318 09:08:02.053742 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 18 09:08:02.054391 master-0 kubenswrapper[26053]: E0318 09:08:02.054296 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: I0318 09:08:02.054412 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: E0318 09:08:02.054445 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: I0318 09:08:02.054464 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: E0318 09:08:02.054492 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: I0318 09:08:02.054509 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="wait-for-host-port"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: E0318 09:08:02.054541 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer"
Mar 18 09:08:02.054595 master-0 kubenswrapper[26053]: I0318 09:08:02.054557 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer"
Mar 18 09:08:02.055347 master-0 kubenswrapper[26053]: I0318 09:08:02.054975 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler"
Mar 18 09:08:02.055347 master-0 kubenswrapper[26053]: I0318 09:08:02.055278 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-cert-syncer"
Mar 18 09:08:02.055347 master-0 kubenswrapper[26053]: I0318 09:08:02.055305 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="8413125cf444e5c95f023c5dd9c6151e" containerName="kube-scheduler-recovery-controller"
Mar 18 09:08:02.163618 master-0 kubenswrapper[26053]: I0318 09:08:02.162662 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.163618 master-0 kubenswrapper[26053]: I0318 09:08:02.162832 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.236016 master-0 kubenswrapper[26053]: I0318 09:08:02.235875 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log"
Mar 18 09:08:02.237752 master-0 kubenswrapper[26053]: I0318 09:08:02.237234 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.245391 master-0 kubenswrapper[26053]: I0318 09:08:02.245307 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8"
Mar 18 09:08:02.265424 master-0 kubenswrapper[26053]: I0318 09:08:02.265343 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.265599 master-0 kubenswrapper[26053]: I0318 09:08:02.265450 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.265685 master-0 kubenswrapper[26053]: I0318 09:08:02.265580 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.265685 master-0 kubenswrapper[26053]: I0318 09:08:02.265625 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e27b7d086edf5d2cf47b703574641d8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8e27b7d086edf5d2cf47b703574641d8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:02.366415 master-0 kubenswrapper[26053]: I0318 09:08:02.366173 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") "
Mar 18 09:08:02.366415 master-0 kubenswrapper[26053]: I0318 09:08:02.366331 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:08:02.366415 master-0 kubenswrapper[26053]: I0318 09:08:02.366349 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"8413125cf444e5c95f023c5dd9c6151e\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") "
Mar 18 09:08:02.366415 master-0 kubenswrapper[26053]: I0318 09:08:02.366392 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8413125cf444e5c95f023c5dd9c6151e" (UID: "8413125cf444e5c95f023c5dd9c6151e"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 18 09:08:02.367007 master-0 kubenswrapper[26053]: I0318 09:08:02.366989 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:08:02.367109 master-0 kubenswrapper[26053]: I0318 09:08:02.367016 26053 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 18 09:08:02.750732 master-0 kubenswrapper[26053]: I0318 09:08:02.750609 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8413125cf444e5c95f023c5dd9c6151e" path="/var/lib/kubelet/pods/8413125cf444e5c95f023c5dd9c6151e/volumes"
Mar 18 09:08:02.945932 master-0 kubenswrapper[26053]: I0318 09:08:02.945883 26053 generic.go:334] "Generic (PLEG): container finished" podID="315ae422-1357-4fce-a2f4-eb10aaaaae24" containerID="67906f7bd6518ef838c9e8ed5bb8263d8a8999589f9fb8651a28ae883631860d" exitCode=0
Mar 18 09:08:02.946156 master-0 kubenswrapper[26053]: I0318 09:08:02.945940 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"315ae422-1357-4fce-a2f4-eb10aaaaae24","Type":"ContainerDied","Data":"67906f7bd6518ef838c9e8ed5bb8263d8a8999589f9fb8651a28ae883631860d"}
Mar 18 09:08:02.951678 master-0 kubenswrapper[26053]: I0318 09:08:02.951296 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log"
Mar 18 09:08:02.951897 master-0 kubenswrapper[26053]: I0318 09:08:02.951867 26053 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e"
containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" exitCode=0 Mar 18 09:08:02.951897 master-0 kubenswrapper[26053]: I0318 09:08:02.951891 26053 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" exitCode=2 Mar 18 09:08:02.951994 master-0 kubenswrapper[26053]: I0318 09:08:02.951902 26053 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" exitCode=0 Mar 18 09:08:02.951994 master-0 kubenswrapper[26053]: I0318 09:08:02.951989 26053 scope.go:117] "RemoveContainer" containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" Mar 18 09:08:02.952378 master-0 kubenswrapper[26053]: I0318 09:08:02.952270 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:08:02.966423 master-0 kubenswrapper[26053]: I0318 09:08:02.966374 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="8413125cf444e5c95f023c5dd9c6151e" podUID="8e27b7d086edf5d2cf47b703574641d8" Mar 18 09:08:02.975877 master-0 kubenswrapper[26053]: I0318 09:08:02.975821 26053 scope.go:117] "RemoveContainer" containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" Mar 18 09:08:02.991694 master-0 kubenswrapper[26053]: I0318 09:08:02.991659 26053 scope.go:117] "RemoveContainer" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" Mar 18 09:08:03.006508 master-0 kubenswrapper[26053]: I0318 09:08:03.006479 26053 scope.go:117] "RemoveContainer" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" Mar 18 09:08:03.027495 master-0 kubenswrapper[26053]: I0318 
09:08:03.027437 26053 scope.go:117] "RemoveContainer" containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" Mar 18 09:08:03.027952 master-0 kubenswrapper[26053]: E0318 09:08:03.027928 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": container with ID starting with 0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391 not found: ID does not exist" containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" Mar 18 09:08:03.028045 master-0 kubenswrapper[26053]: I0318 09:08:03.027960 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391"} err="failed to get container status \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": rpc error: code = NotFound desc = could not find container \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": container with ID starting with 0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391 not found: ID does not exist" Mar 18 09:08:03.028045 master-0 kubenswrapper[26053]: I0318 09:08:03.027980 26053 scope.go:117] "RemoveContainer" containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" Mar 18 09:08:03.028492 master-0 kubenswrapper[26053]: E0318 09:08:03.028467 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": container with ID starting with 3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520 not found: ID does not exist" containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" Mar 18 09:08:03.028538 master-0 kubenswrapper[26053]: I0318 09:08:03.028487 26053 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520"} err="failed to get container status \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": rpc error: code = NotFound desc = could not find container \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": container with ID starting with 3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520 not found: ID does not exist" Mar 18 09:08:03.028538 master-0 kubenswrapper[26053]: I0318 09:08:03.028515 26053 scope.go:117] "RemoveContainer" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" Mar 18 09:08:03.028898 master-0 kubenswrapper[26053]: E0318 09:08:03.028866 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": container with ID starting with fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538 not found: ID does not exist" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" Mar 18 09:08:03.028993 master-0 kubenswrapper[26053]: I0318 09:08:03.028907 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538"} err="failed to get container status \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": rpc error: code = NotFound desc = could not find container \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": container with ID starting with fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538 not found: ID does not exist" Mar 18 09:08:03.028993 master-0 kubenswrapper[26053]: I0318 09:08:03.028921 26053 scope.go:117] "RemoveContainer" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" 
Mar 18 09:08:03.029224 master-0 kubenswrapper[26053]: E0318 09:08:03.029177 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": container with ID starting with 99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287 not found: ID does not exist" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" Mar 18 09:08:03.029224 master-0 kubenswrapper[26053]: I0318 09:08:03.029200 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287"} err="failed to get container status \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": rpc error: code = NotFound desc = could not find container \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": container with ID starting with 99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287 not found: ID does not exist" Mar 18 09:08:03.029224 master-0 kubenswrapper[26053]: I0318 09:08:03.029212 26053 scope.go:117] "RemoveContainer" containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" Mar 18 09:08:03.029430 master-0 kubenswrapper[26053]: I0318 09:08:03.029398 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391"} err="failed to get container status \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": rpc error: code = NotFound desc = could not find container \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": container with ID starting with 0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391 not found: ID does not exist" Mar 18 09:08:03.029497 master-0 kubenswrapper[26053]: I0318 09:08:03.029432 26053 scope.go:117] "RemoveContainer" 
containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" Mar 18 09:08:03.029814 master-0 kubenswrapper[26053]: I0318 09:08:03.029779 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520"} err="failed to get container status \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": rpc error: code = NotFound desc = could not find container \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": container with ID starting with 3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520 not found: ID does not exist" Mar 18 09:08:03.029814 master-0 kubenswrapper[26053]: I0318 09:08:03.029801 26053 scope.go:117] "RemoveContainer" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" Mar 18 09:08:03.030110 master-0 kubenswrapper[26053]: I0318 09:08:03.030055 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538"} err="failed to get container status \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": rpc error: code = NotFound desc = could not find container \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": container with ID starting with fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538 not found: ID does not exist" Mar 18 09:08:03.030110 master-0 kubenswrapper[26053]: I0318 09:08:03.030078 26053 scope.go:117] "RemoveContainer" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" Mar 18 09:08:03.030454 master-0 kubenswrapper[26053]: I0318 09:08:03.030378 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287"} err="failed to get container status 
\"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": rpc error: code = NotFound desc = could not find container \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": container with ID starting with 99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287 not found: ID does not exist" Mar 18 09:08:03.030454 master-0 kubenswrapper[26053]: I0318 09:08:03.030447 26053 scope.go:117] "RemoveContainer" containerID="0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391" Mar 18 09:08:03.030998 master-0 kubenswrapper[26053]: I0318 09:08:03.030936 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391"} err="failed to get container status \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": rpc error: code = NotFound desc = could not find container \"0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391\": container with ID starting with 0e945016e5ad3ca612d9c28d9263f33536983ae254a451bec2931d7083e04391 not found: ID does not exist" Mar 18 09:08:03.030998 master-0 kubenswrapper[26053]: I0318 09:08:03.030993 26053 scope.go:117] "RemoveContainer" containerID="3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520" Mar 18 09:08:03.031306 master-0 kubenswrapper[26053]: I0318 09:08:03.031264 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520"} err="failed to get container status \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": rpc error: code = NotFound desc = could not find container \"3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520\": container with ID starting with 3c4fbfc45d70efcd01458c6d886e63fef0992127787572398e33c8e3ab34c520 not found: ID does not exist" Mar 18 09:08:03.031306 master-0 kubenswrapper[26053]: I0318 
09:08:03.031298 26053 scope.go:117] "RemoveContainer" containerID="fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538" Mar 18 09:08:03.031653 master-0 kubenswrapper[26053]: I0318 09:08:03.031623 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538"} err="failed to get container status \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": rpc error: code = NotFound desc = could not find container \"fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538\": container with ID starting with fdc009359c2112107085b8f02c63abd75fd4e2176cf53ef7347b4a4fd00d9538 not found: ID does not exist" Mar 18 09:08:03.031653 master-0 kubenswrapper[26053]: I0318 09:08:03.031646 26053 scope.go:117] "RemoveContainer" containerID="99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287" Mar 18 09:08:03.031916 master-0 kubenswrapper[26053]: I0318 09:08:03.031887 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287"} err="failed to get container status \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": rpc error: code = NotFound desc = could not find container \"99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287\": container with ID starting with 99a0327fcc44e8bda038947a1b4ee1b04fc65e4463d2a3b38bd274ec5429c287 not found: ID does not exist" Mar 18 09:08:03.729948 master-0 kubenswrapper[26053]: I0318 09:08:03.729883 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:03.772508 master-0 kubenswrapper[26053]: I0318 09:08:03.772414 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d401a7a8-8f03-4d55-893d-5cff077d65d9" Mar 18 09:08:03.772508 master-0 kubenswrapper[26053]: I0318 09:08:03.772471 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="d401a7a8-8f03-4d55-893d-5cff077d65d9" Mar 18 09:08:03.788762 master-0 kubenswrapper[26053]: I0318 09:08:03.787910 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:03.805366 master-0 kubenswrapper[26053]: I0318 09:08:03.805290 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:08:03.808832 master-0 kubenswrapper[26053]: I0318 09:08:03.808780 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:03.820849 master-0 kubenswrapper[26053]: I0318 09:08:03.820795 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:08:03.828233 master-0 kubenswrapper[26053]: I0318 09:08:03.828182 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:08:03.844710 master-0 kubenswrapper[26053]: W0318 09:08:03.844643 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60c2ba061fb7c3edad3900526541ee3c.slice/crio-26be428103a2972549ffaa2401b0e508a5356808a3733a677148921db330d91e WatchSource:0}: Error finding container 26be428103a2972549ffaa2401b0e508a5356808a3733a677148921db330d91e: Status 404 returned error can't find the container with id 26be428103a2972549ffaa2401b0e508a5356808a3733a677148921db330d91e Mar 18 09:08:03.961113 master-0 kubenswrapper[26053]: I0318 09:08:03.961026 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"26be428103a2972549ffaa2401b0e508a5356808a3733a677148921db330d91e"} Mar 18 09:08:04.416742 master-0 kubenswrapper[26053]: I0318 09:08:04.415842 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 09:08:04.516255 master-0 kubenswrapper[26053]: I0318 09:08:04.516205 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock\") pod \"315ae422-1357-4fce-a2f4-eb10aaaaae24\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " Mar 18 09:08:04.516255 master-0 kubenswrapper[26053]: I0318 09:08:04.516259 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir\") pod \"315ae422-1357-4fce-a2f4-eb10aaaaae24\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " Mar 18 09:08:04.516531 master-0 kubenswrapper[26053]: I0318 09:08:04.516297 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access\") pod \"315ae422-1357-4fce-a2f4-eb10aaaaae24\" (UID: \"315ae422-1357-4fce-a2f4-eb10aaaaae24\") " Mar 18 09:08:04.516898 master-0 kubenswrapper[26053]: I0318 09:08:04.516870 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock" (OuterVolumeSpecName: "var-lock") pod "315ae422-1357-4fce-a2f4-eb10aaaaae24" (UID: "315ae422-1357-4fce-a2f4-eb10aaaaae24"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:08:04.517001 master-0 kubenswrapper[26053]: I0318 09:08:04.516951 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "315ae422-1357-4fce-a2f4-eb10aaaaae24" (UID: "315ae422-1357-4fce-a2f4-eb10aaaaae24"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:08:04.519855 master-0 kubenswrapper[26053]: I0318 09:08:04.519792 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "315ae422-1357-4fce-a2f4-eb10aaaaae24" (UID: "315ae422-1357-4fce-a2f4-eb10aaaaae24"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:08:04.617486 master-0 kubenswrapper[26053]: I0318 09:08:04.617423 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/315ae422-1357-4fce-a2f4-eb10aaaaae24-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:04.617486 master-0 kubenswrapper[26053]: I0318 09:08:04.617474 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:04.617486 master-0 kubenswrapper[26053]: I0318 09:08:04.617487 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/315ae422-1357-4fce-a2f4-eb10aaaaae24-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:04.971940 master-0 kubenswrapper[26053]: I0318 09:08:04.971871 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"d679a1297cafaf3badf630142366c09d03cda0e9cd66b05fa66aef0604da0f46"} Mar 18 09:08:04.971940 master-0 kubenswrapper[26053]: I0318 09:08:04.971921 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"2eb70f844dc859b9c27f10b4a002866192e0ad65ec1b06f30aaa34b77fb0b7f9"} Mar 18 09:08:04.971940 master-0 kubenswrapper[26053]: I0318 09:08:04.971931 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"95b80d622ddf2ed768357e028eaa3eb8c0cdb8ebe103e34d7e2c03682a426f65"} Mar 18 09:08:04.971940 master-0 kubenswrapper[26053]: I0318 09:08:04.971941 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"ed8bdc24b42ed8397f238b0c55ea4555545fbf502b6a47a78f76d63cdd9cc08f"} Mar 18 09:08:04.979601 master-0 kubenswrapper[26053]: I0318 09:08:04.976277 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"315ae422-1357-4fce-a2f4-eb10aaaaae24","Type":"ContainerDied","Data":"bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a"} Mar 18 09:08:04.979601 master-0 kubenswrapper[26053]: I0318 09:08:04.976309 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3a80cdf9125d0b266a8f72ca246c84551d22148fcba12a993ec6103b376d7a" Mar 18 09:08:04.979601 master-0 kubenswrapper[26053]: I0318 09:08:04.976361 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 18 09:08:05.009530 master-0 kubenswrapper[26053]: I0318 09:08:05.009422 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.009391855 podStartE2EDuration="2.009391855s" podCreationTimestamp="2026-03-18 09:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:08:04.999071508 +0000 UTC m=+272.492422889" watchObservedRunningTime="2026-03-18 09:08:05.009391855 +0000 UTC m=+272.502743256" Mar 18 09:08:10.130012 master-0 kubenswrapper[26053]: I0318 09:08:10.129939 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" containerID="cri-o://b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc" gracePeriod=15 Mar 18 09:08:10.279463 master-0 kubenswrapper[26053]: I0318 09:08:10.279393 26053 patch_prober.go:28] interesting pod/console-7748c6b99d-fkjm5 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 18 09:08:10.279719 master-0 kubenswrapper[26053]: I0318 09:08:10.279469 26053 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-7748c6b99d-fkjm5" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 18 09:08:10.692613 master-0 kubenswrapper[26053]: I0318 09:08:10.691472 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-7748c6b99d-fkjm5_6a611129-8d70-4618-8512-8e0a3491353e/console/0.log" Mar 18 09:08:10.692613 master-0 kubenswrapper[26053]: I0318 09:08:10.691545 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:08:10.812023 master-0 kubenswrapper[26053]: I0318 09:08:10.811936 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812366 master-0 kubenswrapper[26053]: I0318 09:08:10.812089 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812366 master-0 kubenswrapper[26053]: I0318 09:08:10.812136 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812366 master-0 kubenswrapper[26053]: I0318 09:08:10.812221 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812366 master-0 kubenswrapper[26053]: I0318 09:08:10.812295 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812366 master-0 kubenswrapper[26053]: I0318 09:08:10.812344 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.812888 master-0 kubenswrapper[26053]: I0318 09:08:10.812388 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc7f8\" (UniqueName: \"kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8\") pod \"6a611129-8d70-4618-8512-8e0a3491353e\" (UID: \"6a611129-8d70-4618-8512-8e0a3491353e\") " Mar 18 09:08:10.813002 master-0 kubenswrapper[26053]: I0318 09:08:10.812944 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:08:10.813384 master-0 kubenswrapper[26053]: I0318 09:08:10.813332 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:08:10.813679 master-0 kubenswrapper[26053]: I0318 09:08:10.813460 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:08:10.813679 master-0 kubenswrapper[26053]: I0318 09:08:10.813648 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config" (OuterVolumeSpecName: "console-config") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:08:10.815102 master-0 kubenswrapper[26053]: I0318 09:08:10.815050 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:08:10.817736 master-0 kubenswrapper[26053]: I0318 09:08:10.817648 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:08:10.818829 master-0 kubenswrapper[26053]: I0318 09:08:10.818759 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8" (OuterVolumeSpecName: "kube-api-access-wc7f8") pod "6a611129-8d70-4618-8512-8e0a3491353e" (UID: "6a611129-8d70-4618-8512-8e0a3491353e"). InnerVolumeSpecName "kube-api-access-wc7f8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:08:10.914715 master-0 kubenswrapper[26053]: I0318 09:08:10.914551 26053 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:10.914715 master-0 kubenswrapper[26053]: I0318 09:08:10.914639 26053 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:10.914715 master-0 kubenswrapper[26053]: I0318 09:08:10.914663 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:10.914715 master-0 kubenswrapper[26053]: I0318 09:08:10.914688 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc7f8\" (UniqueName: \"kubernetes.io/projected/6a611129-8d70-4618-8512-8e0a3491353e-kube-api-access-wc7f8\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:10.914715 master-0 kubenswrapper[26053]: I0318 09:08:10.914708 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a611129-8d70-4618-8512-8e0a3491353e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 
09:08:10.915048 master-0 kubenswrapper[26053]: I0318 09:08:10.914729 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:10.915048 master-0 kubenswrapper[26053]: I0318 09:08:10.914748 26053 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a611129-8d70-4618-8512-8e0a3491353e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:11.050947 master-0 kubenswrapper[26053]: I0318 09:08:11.050858 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7748c6b99d-fkjm5_6a611129-8d70-4618-8512-8e0a3491353e/console/0.log" Mar 18 09:08:11.050947 master-0 kubenswrapper[26053]: I0318 09:08:11.050940 26053 generic.go:334] "Generic (PLEG): container finished" podID="6a611129-8d70-4618-8512-8e0a3491353e" containerID="b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc" exitCode=2 Mar 18 09:08:11.051360 master-0 kubenswrapper[26053]: I0318 09:08:11.050981 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7748c6b99d-fkjm5" event={"ID":"6a611129-8d70-4618-8512-8e0a3491353e","Type":"ContainerDied","Data":"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc"} Mar 18 09:08:11.051360 master-0 kubenswrapper[26053]: I0318 09:08:11.051016 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7748c6b99d-fkjm5" event={"ID":"6a611129-8d70-4618-8512-8e0a3491353e","Type":"ContainerDied","Data":"008295768abec88e92581978488e6f15584ca83dc897a756588e8c22ad9deff9"} Mar 18 09:08:11.051360 master-0 kubenswrapper[26053]: I0318 09:08:11.051034 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7748c6b99d-fkjm5" Mar 18 09:08:11.051736 master-0 kubenswrapper[26053]: I0318 09:08:11.051042 26053 scope.go:117] "RemoveContainer" containerID="b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc" Mar 18 09:08:11.103671 master-0 kubenswrapper[26053]: I0318 09:08:11.103608 26053 scope.go:117] "RemoveContainer" containerID="b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc" Mar 18 09:08:11.105018 master-0 kubenswrapper[26053]: E0318 09:08:11.104948 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc\": container with ID starting with b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc not found: ID does not exist" containerID="b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc" Mar 18 09:08:11.105137 master-0 kubenswrapper[26053]: I0318 09:08:11.105006 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc"} err="failed to get container status \"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc\": rpc error: code = NotFound desc = could not find container \"b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc\": container with ID starting with b9d58a75817e6af4f01b4e3ee5e5f66dccf84ea71c0c5c545d67e0ab6e4908fc not found: ID does not exist" Mar 18 09:08:11.121297 master-0 kubenswrapper[26053]: I0318 09:08:11.121219 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:08:11.130162 master-0 kubenswrapper[26053]: I0318 09:08:11.130097 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7748c6b99d-fkjm5"] Mar 18 09:08:12.748377 master-0 kubenswrapper[26053]: I0318 09:08:12.748315 26053 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a611129-8d70-4618-8512-8e0a3491353e" path="/var/lib/kubelet/pods/6a611129-8d70-4618-8512-8e0a3491353e/volumes" Mar 18 09:08:13.729715 master-0 kubenswrapper[26053]: I0318 09:08:13.729500 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:08:13.765196 master-0 kubenswrapper[26053]: I0318 09:08:13.765113 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c1a49c56-dfc5-4837-984d-546fd41485f9" Mar 18 09:08:13.765196 master-0 kubenswrapper[26053]: I0318 09:08:13.765163 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="c1a49c56-dfc5-4837-984d-546fd41485f9" Mar 18 09:08:13.787709 master-0 kubenswrapper[26053]: I0318 09:08:13.784799 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:08:13.792910 master-0 kubenswrapper[26053]: I0318 09:08:13.792856 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:08:13.800363 master-0 kubenswrapper[26053]: I0318 09:08:13.800296 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:08:13.806660 master-0 kubenswrapper[26053]: I0318 09:08:13.806601 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:08:13.809384 master-0 kubenswrapper[26053]: I0318 09:08:13.809310 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.809671 master-0 kubenswrapper[26053]: I0318 09:08:13.809551 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.810130 master-0 kubenswrapper[26053]: I0318 09:08:13.810109 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.810260 master-0 kubenswrapper[26053]: I0318 09:08:13.810243 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.815076 master-0 kubenswrapper[26053]: I0318 09:08:13.815014 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.815196 master-0 kubenswrapper[26053]: I0318 09:08:13.815176 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:13.818059 master-0 kubenswrapper[26053]: I0318 09:08:13.818012 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 18 09:08:13.842650 master-0 kubenswrapper[26053]: W0318 09:08:13.841712 26053 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e27b7d086edf5d2cf47b703574641d8.slice/crio-0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26 WatchSource:0}: Error finding container 0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26: Status 404 returned error can't find the container with id 0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26 Mar 18 09:08:14.082611 master-0 kubenswrapper[26053]: I0318 09:08:14.082543 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c"} Mar 18 09:08:14.082611 master-0 kubenswrapper[26053]: I0318 09:08:14.082606 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26"} Mar 18 09:08:14.088329 master-0 kubenswrapper[26053]: I0318 09:08:14.088273 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:14.088825 master-0 kubenswrapper[26053]: I0318 09:08:14.088783 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:08:18.790716 master-0 kubenswrapper[26053]: I0318 09:08:18.790668 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_4c36ffe6-e550-400e-9cf5-883d543fbb05/installer/0.log" Mar 18 09:08:18.793064 master-0 kubenswrapper[26053]: I0318 09:08:18.790734 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:08:18.947492 master-0 kubenswrapper[26053]: I0318 09:08:18.947425 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access\") pod \"4c36ffe6-e550-400e-9cf5-883d543fbb05\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " Mar 18 09:08:18.947728 master-0 kubenswrapper[26053]: I0318 09:08:18.947595 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock\") pod \"4c36ffe6-e550-400e-9cf5-883d543fbb05\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " Mar 18 09:08:18.947728 master-0 kubenswrapper[26053]: I0318 09:08:18.947653 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir\") pod \"4c36ffe6-e550-400e-9cf5-883d543fbb05\" (UID: \"4c36ffe6-e550-400e-9cf5-883d543fbb05\") " Mar 18 09:08:18.947888 master-0 kubenswrapper[26053]: I0318 09:08:18.947820 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock" (OuterVolumeSpecName: "var-lock") pod "4c36ffe6-e550-400e-9cf5-883d543fbb05" (UID: "4c36ffe6-e550-400e-9cf5-883d543fbb05"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:08:18.947943 master-0 kubenswrapper[26053]: I0318 09:08:18.947920 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4c36ffe6-e550-400e-9cf5-883d543fbb05" (UID: "4c36ffe6-e550-400e-9cf5-883d543fbb05"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:08:18.948291 master-0 kubenswrapper[26053]: I0318 09:08:18.948257 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:18.948291 master-0 kubenswrapper[26053]: I0318 09:08:18.948283 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4c36ffe6-e550-400e-9cf5-883d543fbb05-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:18.952757 master-0 kubenswrapper[26053]: I0318 09:08:18.952737 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4c36ffe6-e550-400e-9cf5-883d543fbb05" (UID: "4c36ffe6-e550-400e-9cf5-883d543fbb05"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:08:19.050199 master-0 kubenswrapper[26053]: I0318 09:08:19.050077 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c36ffe6-e550-400e-9cf5-883d543fbb05-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:08:19.127097 master-0 kubenswrapper[26053]: I0318 09:08:19.127063 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_4c36ffe6-e550-400e-9cf5-883d543fbb05/installer/0.log" Mar 18 09:08:19.127377 master-0 kubenswrapper[26053]: I0318 09:08:19.127350 26053 generic.go:334] "Generic (PLEG): container finished" podID="4c36ffe6-e550-400e-9cf5-883d543fbb05" containerID="bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20" exitCode=1 Mar 18 09:08:19.127488 master-0 kubenswrapper[26053]: I0318 09:08:19.127447 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 18 09:08:19.127665 master-0 kubenswrapper[26053]: I0318 09:08:19.127429 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"4c36ffe6-e550-400e-9cf5-883d543fbb05","Type":"ContainerDied","Data":"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20"} Mar 18 09:08:19.127737 master-0 kubenswrapper[26053]: I0318 09:08:19.127692 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"4c36ffe6-e550-400e-9cf5-883d543fbb05","Type":"ContainerDied","Data":"048a506e1de498dee7fc7b995c202c0841d636d426b490b1c873c2fc0cb74148"} Mar 18 09:08:19.127737 master-0 kubenswrapper[26053]: I0318 09:08:19.127725 26053 scope.go:117] "RemoveContainer" containerID="bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20" Mar 18 09:08:19.153264 master-0 kubenswrapper[26053]: I0318 
09:08:19.153232 26053 scope.go:117] "RemoveContainer" containerID="bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20" Mar 18 09:08:19.153879 master-0 kubenswrapper[26053]: E0318 09:08:19.153837 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20\": container with ID starting with bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20 not found: ID does not exist" containerID="bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20" Mar 18 09:08:19.153956 master-0 kubenswrapper[26053]: I0318 09:08:19.153878 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20"} err="failed to get container status \"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20\": rpc error: code = NotFound desc = could not find container \"bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20\": container with ID starting with bea8d394712dceb969d9b7f07a209d4a66be22db6c1ba8b611f9661d6abcda20 not found: ID does not exist" Mar 18 09:08:19.189217 master-0 kubenswrapper[26053]: I0318 09:08:19.189133 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 09:08:19.202895 master-0 kubenswrapper[26053]: I0318 09:08:19.202810 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 18 09:08:20.744180 master-0 kubenswrapper[26053]: I0318 09:08:20.744101 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c36ffe6-e550-400e-9cf5-883d543fbb05" path="/var/lib/kubelet/pods/4c36ffe6-e550-400e-9cf5-883d543fbb05/volumes" Mar 18 09:08:32.711315 master-0 kubenswrapper[26053]: I0318 09:08:32.711186 26053 kubelet.go:1505] "Image garbage collection succeeded" Mar 18 
09:08:44.331371 master-0 kubenswrapper[26053]: I0318 09:08:44.331238 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/wait-for-host-port/0.log" Mar 18 09:08:44.331371 master-0 kubenswrapper[26053]: I0318 09:08:44.331300 26053 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c" exitCode=124 Mar 18 09:08:44.332292 master-0 kubenswrapper[26053]: I0318 09:08:44.331344 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c"} Mar 18 09:08:45.340317 master-0 kubenswrapper[26053]: I0318 09:08:45.340273 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/wait-for-host-port/0.log" Mar 18 09:08:45.341042 master-0 kubenswrapper[26053]: I0318 09:08:45.341012 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"24ebc865e0823dedadea33e4a1b0821ba32793cc7305dec14538b0dcad601784"} Mar 18 09:08:51.395712 master-0 kubenswrapper[26053]: I0318 09:08:51.395557 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/wait-for-host-port/0.log" Mar 18 09:08:51.396327 master-0 kubenswrapper[26053]: I0318 09:08:51.395755 26053 generic.go:334] "Generic (PLEG): container finished" podID="8e27b7d086edf5d2cf47b703574641d8" containerID="24ebc865e0823dedadea33e4a1b0821ba32793cc7305dec14538b0dcad601784" exitCode=0 Mar 18 
09:08:51.396327 master-0 kubenswrapper[26053]: I0318 09:08:51.395812 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerDied","Data":"24ebc865e0823dedadea33e4a1b0821ba32793cc7305dec14538b0dcad601784"} Mar 18 09:08:51.396327 master-0 kubenswrapper[26053]: I0318 09:08:51.395876 26053 scope.go:117] "RemoveContainer" containerID="4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c" Mar 18 09:08:51.396810 master-0 kubenswrapper[26053]: I0318 09:08:51.396755 26053 scope.go:117] "RemoveContainer" containerID="4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c" Mar 18 09:08:51.413375 master-0 kubenswrapper[26053]: E0318 09:08:51.413304 26053 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-0_openshift-kube-scheduler_8e27b7d086edf5d2cf47b703574641d8_0 in pod sandbox 0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26 from index: no such id: '4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c'" containerID="4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c" Mar 18 09:08:51.413466 master-0 kubenswrapper[26053]: E0318 09:08:51.413402 26053 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"wait-for-host-port\": rpc error: code = Unknown desc = failed to delete container k8s_wait-for-host-port_openshift-kube-scheduler-master-0_openshift-kube-scheduler_8e27b7d086edf5d2cf47b703574641d8_0 in pod sandbox 0f0a7ecc22cd139f538e7a9d61546f91664217c857718a756404f86da2e03f26 from index: no such id: '4fe15d27be9d7b1ca592c61e0b2271c838f58c06979ad78c2de366cfd1135c3c'; Skipping pod \"openshift-kube-scheduler-master-0_openshift-kube-scheduler(8e27b7d086edf5d2cf47b703574641d8)\"" logger="UnhandledError" Mar 18 09:08:52.240000 
master-0 kubenswrapper[26053]: I0318 09:08:52.239954 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 18 09:08:52.240629 master-0 kubenswrapper[26053]: E0318 09:08:52.240608 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" Mar 18 09:08:52.240734 master-0 kubenswrapper[26053]: I0318 09:08:52.240719 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" Mar 18 09:08:52.240837 master-0 kubenswrapper[26053]: E0318 09:08:52.240821 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c36ffe6-e550-400e-9cf5-883d543fbb05" containerName="installer" Mar 18 09:08:52.240916 master-0 kubenswrapper[26053]: I0318 09:08:52.240903 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c36ffe6-e550-400e-9cf5-883d543fbb05" containerName="installer" Mar 18 09:08:52.241001 master-0 kubenswrapper[26053]: E0318 09:08:52.240987 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315ae422-1357-4fce-a2f4-eb10aaaaae24" containerName="installer" Mar 18 09:08:52.241084 master-0 kubenswrapper[26053]: I0318 09:08:52.241071 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="315ae422-1357-4fce-a2f4-eb10aaaaae24" containerName="installer" Mar 18 09:08:52.241324 master-0 kubenswrapper[26053]: I0318 09:08:52.241307 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a611129-8d70-4618-8512-8e0a3491353e" containerName="console" Mar 18 09:08:52.241419 master-0 kubenswrapper[26053]: I0318 09:08:52.241405 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c36ffe6-e550-400e-9cf5-883d543fbb05" containerName="installer" Mar 18 09:08:52.241535 master-0 kubenswrapper[26053]: I0318 09:08:52.241521 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="315ae422-1357-4fce-a2f4-eb10aaaaae24" 
containerName="installer" Mar 18 09:08:52.243918 master-0 kubenswrapper[26053]: I0318 09:08:52.243868 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.258172 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.258718 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.259132 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.259177 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.259620 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6tjc5u8lektnd" Mar 18 09:08:52.259800 master-0 kubenswrapper[26053]: I0318 09:08:52.259201 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 09:08:52.262971 master-0 kubenswrapper[26053]: I0318 09:08:52.262168 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 09:08:52.288376 master-0 kubenswrapper[26053]: I0318 09:08:52.288325 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"] Mar 18 09:08:52.293585 master-0 kubenswrapper[26053]: I0318 09:08:52.289666 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" Mar 18 09:08:52.293585 master-0 kubenswrapper[26053]: I0318 09:08:52.292558 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 18 09:08:52.293585 master-0 kubenswrapper[26053]: I0318 09:08:52.292719 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.295840 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.296266 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.298405 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.298523 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.298696 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.299011 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 18 09:08:52.301132 master-0 kubenswrapper[26053]: I0318 09:08:52.300044 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 09:08:52.306658 master-0 kubenswrapper[26053]: I0318 09:08:52.304685 26053 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 18 09:08:52.309863 master-0 kubenswrapper[26053]: I0318 09:08:52.309820 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.309928 master-0 kubenswrapper[26053]: I0318 09:08:52.309864 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.309928 master-0 kubenswrapper[26053]: I0318 09:08:52.309886 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.309928 master-0 kubenswrapper[26053]: I0318 09:08:52.309905 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwvdq\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-kube-api-access-bwvdq\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.309928 master-0 kubenswrapper[26053]: I0318 09:08:52.309921 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-metrics-client-ca\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.309936 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.309952 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.309981 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.309998 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.310017 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.310031 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310049 master-0 kubenswrapper[26053]: I0318 09:08:52.310046 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlrs\" (UniqueName: \"kubernetes.io/projected/db4437ea-0a1e-478b-a9fe-a06c182f83a1-kube-api-access-qdlrs\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310065 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310082 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310103 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310117 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310134 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-trusted-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310148 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310167 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310188 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-serving-certs-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310206 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310226 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310240 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310253 master-0 kubenswrapper[26053]: I0318 09:08:52.310254 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310598 master-0 kubenswrapper[26053]: I0318 09:08:52.310271 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-federate-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.310598 master-0 kubenswrapper[26053]: I0318 09:08:52.310298 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.310996 master-0 kubenswrapper[26053]: I0318 09:08:52.310971 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 18 09:08:52.311167 master-0 kubenswrapper[26053]: I0318 09:08:52.311106 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"]
Mar 18 09:08:52.313612 master-0 kubenswrapper[26053]: I0318 09:08:52.313582 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"
Mar 18 09:08:52.316210 master-0 kubenswrapper[26053]: I0318 09:08:52.316168 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 18 09:08:52.316361 master-0 kubenswrapper[26053]: I0318 09:08:52.316339 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 18 09:08:52.316414 master-0 kubenswrapper[26053]: I0318 09:08:52.316391 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 18 09:08:52.316489 master-0 kubenswrapper[26053]: I0318 09:08:52.316346 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9ugglqvgh687f"
Mar 18 09:08:52.316544 master-0 kubenswrapper[26053]: I0318 09:08:52.316189 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 18 09:08:52.316669 master-0 kubenswrapper[26053]: I0318 09:08:52.316642 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 18 09:08:52.321830 master-0 kubenswrapper[26053]: I0318 09:08:52.321789 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 18 09:08:52.324733 master-0 kubenswrapper[26053]: I0318 09:08:52.324695 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 18 09:08:52.331885 master-0 kubenswrapper[26053]: I0318 09:08:52.329114 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 18 09:08:52.331885 master-0 kubenswrapper[26053]: I0318 09:08:52.329328 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 18 09:08:52.331885 master-0 kubenswrapper[26053]: I0318 09:08:52.329487 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 18 09:08:52.331885 master-0 kubenswrapper[26053]: I0318 09:08:52.329896 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 18 09:08:52.331885 master-0 kubenswrapper[26053]: I0318 09:08:52.331429 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 18 09:08:52.336670 master-0 kubenswrapper[26053]: I0318 09:08:52.332711 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"]
Mar 18 09:08:52.336670 master-0 kubenswrapper[26053]: I0318 09:08:52.335259 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 18 09:08:52.336670 master-0 kubenswrapper[26053]: I0318 09:08:52.336253 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 18 09:08:52.345139 master-0 kubenswrapper[26053]: I0318 09:08:52.344803 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 18 09:08:52.346238 master-0 kubenswrapper[26053]: I0318 09:08:52.346198 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 18 09:08:52.359639 master-0 kubenswrapper[26053]: I0318 09:08:52.355149 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"]
Mar 18 09:08:52.359639 master-0 kubenswrapper[26053]: I0318 09:08:52.359175 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 09:08:52.406551 master-0 kubenswrapper[26053]: I0318 09:08:52.406499 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"23ac9ed225074018d37bee52bada47af56c5c9bad1f0c09b6b587cb7c4396e40"}
Mar 18 09:08:52.406551 master-0 kubenswrapper[26053]: I0318 09:08:52.406554 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"7a94b236ff42675004de06a30cea6a31f5f675073440f8aeca0960f8426267c5"}
Mar 18 09:08:52.407120 master-0 kubenswrapper[26053]: I0318 09:08:52.406583 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8e27b7d086edf5d2cf47b703574641d8","Type":"ContainerStarted","Data":"5a48f3944b3bb227928884831dc6a6f89827f367565127f3e667962996b2dbe7"}
Mar 18 09:08:52.407120 master-0 kubenswrapper[26053]: I0318 09:08:52.406851 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 18 09:08:52.411152 master-0 kubenswrapper[26053]: I0318 09:08:52.411128 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411312 master-0 kubenswrapper[26053]: I0318 09:08:52.411293 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-federate-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.411408 master-0 kubenswrapper[26053]: I0318 09:08:52.411395 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411502 master-0 kubenswrapper[26053]: I0318 09:08:52.411489 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.411595 master-0 kubenswrapper[26053]: I0318 09:08:52.411582 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411680 master-0 kubenswrapper[26053]: I0318 09:08:52.411666 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411763 master-0 kubenswrapper[26053]: I0318 09:08:52.411749 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwvdq\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-kube-api-access-bwvdq\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411837 master-0 kubenswrapper[26053]: I0318 09:08:52.411825 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-metrics-client-ca\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.411911 master-0 kubenswrapper[26053]: I0318 09:08:52.411900 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.411986 master-0 kubenswrapper[26053]: I0318 09:08:52.411974 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412084 master-0 kubenswrapper[26053]: I0318 09:08:52.412072 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412169 master-0 kubenswrapper[26053]: I0318 09:08:52.412156 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412248 master-0 kubenswrapper[26053]: I0318 09:08:52.412233 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412343 master-0 kubenswrapper[26053]: I0318 09:08:52.412324 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412440 master-0 kubenswrapper[26053]: I0318 09:08:52.412422 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlrs\" (UniqueName: \"kubernetes.io/projected/db4437ea-0a1e-478b-a9fe-a06c182f83a1-kube-api-access-qdlrs\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.412550 master-0 kubenswrapper[26053]: I0318 09:08:52.412531 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412691 master-0 kubenswrapper[26053]: I0318 09:08:52.412672 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412813 master-0 kubenswrapper[26053]: I0318 09:08:52.412793 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.412920 master-0 kubenswrapper[26053]: I0318 09:08:52.412901 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.413019 master-0 kubenswrapper[26053]: I0318 09:08:52.413001 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-trusted-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.413125 master-0 kubenswrapper[26053]: I0318 09:08:52.413109 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.413229 master-0 kubenswrapper[26053]: I0318 09:08:52.413213 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.413327 master-0 kubenswrapper[26053]: I0318 09:08:52.413310 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.413422 master-0 kubenswrapper[26053]: I0318 09:08:52.413406 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-serving-certs-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.413514 master-0 kubenswrapper[26053]: I0318 09:08:52.413498 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.413653 master-0 kubenswrapper[26053]: I0318 09:08:52.413634 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.415474 master-0 kubenswrapper[26053]: I0318 09:08:52.415437 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.415798 master-0 kubenswrapper[26053]: I0318 09:08:52.415752 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.416142 master-0 kubenswrapper[26053]: I0318 09:08:52.416099 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.416307 master-0 kubenswrapper[26053]: I0318 09:08:52.416265 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.417299 master-0 kubenswrapper[26053]: I0318 09:08:52.417269 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.418072 master-0 kubenswrapper[26053]: I0318 09:08:52.418039 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-metrics-client-ca\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.418451 master-0 kubenswrapper[26053]: I0318 09:08:52.418391 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.421118 master-0 kubenswrapper[26053]: I0318 09:08:52.421069 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.421496 master-0 kubenswrapper[26053]: I0318 09:08:52.421446 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-config-out\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.422049 master-0 kubenswrapper[26053]: I0318 09:08:52.422006 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.422553 master-0 kubenswrapper[26053]: I0318 09:08:52.422519 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.422671 master-0 kubenswrapper[26053]: I0318 09:08:52.422618 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-serving-certs-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.423661 master-0 kubenswrapper[26053]: I0318 09:08:52.423616 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.423748 master-0 kubenswrapper[26053]: I0318 09:08:52.423652 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-federate-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.425367 master-0 kubenswrapper[26053]: I0318 09:08:52.425332 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.425448 master-0 kubenswrapper[26053]: I0318 09:08:52.425406 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.425791 master-0 kubenswrapper[26053]: I0318 09:08:52.425759 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.425959 master-0 kubenswrapper[26053]: I0318 09:08:52.425930 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-trusted-ca-bundle\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.429327 master-0 kubenswrapper[26053]: I0318 09:08:52.426718 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.429327 master-0 kubenswrapper[26053]: I0318 09:08:52.428190 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-web-config\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.429327 master-0 kubenswrapper[26053]: I0318 09:08:52.428833 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=39.428821793 podStartE2EDuration="39.428821793s" podCreationTimestamp="2026-03-18 09:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:08:52.428388522 +0000 UTC m=+319.921739913" watchObservedRunningTime="2026-03-18 09:08:52.428821793 +0000 UTC m=+319.922173174"
Mar 18 09:08:52.431795 master-0 kubenswrapper[26053]: I0318 09:08:52.431743 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.434206 master-0 kubenswrapper[26053]: I0318 09:08:52.434174 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/2e6ee2ab-ba60-4663-90ab-10035e03107a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.434299 master-0 kubenswrapper[26053]: I0318 09:08:52.434197 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/2e6ee2ab-ba60-4663-90ab-10035e03107a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.436735 master-0 kubenswrapper[26053]: I0318 09:08:52.436672 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/db4437ea-0a1e-478b-a9fe-a06c182f83a1-telemeter-client-tls\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.441681 master-0 kubenswrapper[26053]: I0318 09:08:52.441630 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwvdq\" (UniqueName: \"kubernetes.io/projected/2e6ee2ab-ba60-4663-90ab-10035e03107a-kube-api-access-bwvdq\") pod \"prometheus-k8s-0\" (UID: \"2e6ee2ab-ba60-4663-90ab-10035e03107a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:08:52.444494 master-0 kubenswrapper[26053]: I0318 09:08:52.444413 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlrs\" (UniqueName: \"kubernetes.io/projected/db4437ea-0a1e-478b-a9fe-a06c182f83a1-kube-api-access-qdlrs\") pod \"telemeter-client-55b7f8bbf6-nj5q5\" (UID: \"db4437ea-0a1e-478b-a9fe-a06c182f83a1\") " pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"
Mar 18 09:08:52.515786 master-0 kubenswrapper[26053]: I0318 09:08:52.515702 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 09:08:52.515786 master-0 kubenswrapper[26053]: I0318 09:08:52.515792 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6706b96f-9bc3-4664-9fdc-2c0693ddf787-metrics-client-ca\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"
Mar 18 09:08:52.516057 master-0 kubenswrapper[26053]: I0318 09:08:52.515834 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 09:08:52.516057 master-0 kubenswrapper[26053]: I0318 09:08:52.515998 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-web-config\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 18 09:08:52.516123 master-0 kubenswrapper[26053]: I0318 09:08:52.516080 26053 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-out\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516239 master-0 kubenswrapper[26053]: I0318 09:08:52.516129 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48zmw\" (UniqueName: \"kubernetes.io/projected/6706b96f-9bc3-4664-9fdc-2c0693ddf787-kube-api-access-48zmw\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.516239 master-0 kubenswrapper[26053]: I0318 09:08:52.516205 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.516329 master-0 kubenswrapper[26053]: I0318 09:08:52.516242 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg2mq\" (UniqueName: \"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-kube-api-access-kg2mq\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516373 master-0 kubenswrapper[26053]: I0318 09:08:52.516330 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: 
\"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516420 master-0 kubenswrapper[26053]: I0318 09:08:52.516396 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.516468 master-0 kubenswrapper[26053]: I0318 09:08:52.516431 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516510 master-0 kubenswrapper[26053]: I0318 09:08:52.516496 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516556 master-0 kubenswrapper[26053]: I0318 09:08:52.516528 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.516654 
master-0 kubenswrapper[26053]: I0318 09:08:52.516558 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516774 master-0 kubenswrapper[26053]: I0318 09:08:52.516719 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-volume\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516774 master-0 kubenswrapper[26053]: I0318 09:08:52.516761 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.516901 master-0 kubenswrapper[26053]: I0318 09:08:52.516844 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.516971 master-0 kubenswrapper[26053]: I0318 09:08:52.516948 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.517024 master-0 kubenswrapper[26053]: I0318 09:08:52.517006 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.517093 master-0 kubenswrapper[26053]: I0318 09:08:52.517075 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-grpc-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.599316 master-0 kubenswrapper[26053]: I0318 09:08:52.599198 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:08:52.621580 master-0 kubenswrapper[26053]: I0318 09:08:52.620901 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" Mar 18 09:08:52.621580 master-0 kubenswrapper[26053]: I0318 09:08:52.621397 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-web-config\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.621580 master-0 kubenswrapper[26053]: I0318 09:08:52.621483 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-out\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623756 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48zmw\" (UniqueName: \"kubernetes.io/projected/6706b96f-9bc3-4664-9fdc-2c0693ddf787-kube-api-access-48zmw\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623832 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623884 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg2mq\" (UniqueName: 
\"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-kube-api-access-kg2mq\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623922 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623963 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.623997 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624045 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 
kubenswrapper[26053]: I0318 09:08:52.624082 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624119 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624184 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-volume\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624222 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624279 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: 
\"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624334 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624390 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624460 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-grpc-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624520 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624593 26053 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6706b96f-9bc3-4664-9fdc-2c0693ddf787-metrics-client-ca\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624629 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.625369 master-0 kubenswrapper[26053]: I0318 09:08:52.624702 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.626794 master-0 kubenswrapper[26053]: I0318 09:08:52.626261 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-out\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.628800 master-0 kubenswrapper[26053]: I0318 09:08:52.628551 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-web-config\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.631629 master-0 kubenswrapper[26053]: I0318 09:08:52.629858 26053 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.631629 master-0 kubenswrapper[26053]: I0318 09:08:52.630596 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.632098 master-0 kubenswrapper[26053]: I0318 09:08:52.631828 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.633974 master-0 kubenswrapper[26053]: I0318 09:08:52.633383 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.633974 master-0 kubenswrapper[26053]: I0318 09:08:52.633669 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6706b96f-9bc3-4664-9fdc-2c0693ddf787-metrics-client-ca\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.633974 master-0 
kubenswrapper[26053]: I0318 09:08:52.633685 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.636328 master-0 kubenswrapper[26053]: I0318 09:08:52.635694 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.637040 master-0 kubenswrapper[26053]: I0318 09:08:52.636971 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-grpc-tls\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.637497 master-0 kubenswrapper[26053]: I0318 09:08:52.637420 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.637829 master-0 kubenswrapper[26053]: I0318 09:08:52.637330 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1ba3504d-c2ce-407f-b0e6-14582e17560e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.639793 master-0 kubenswrapper[26053]: I0318 09:08:52.639158 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.641628 master-0 kubenswrapper[26053]: I0318 09:08:52.640030 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.641628 master-0 kubenswrapper[26053]: I0318 09:08:52.640528 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-config-volume\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.641834 master-0 kubenswrapper[26053]: I0318 09:08:52.641803 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg2mq\" (UniqueName: \"kubernetes.io/projected/1ba3504d-c2ce-407f-b0e6-14582e17560e-kube-api-access-kg2mq\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.643073 master-0 kubenswrapper[26053]: I0318 09:08:52.642614 26053 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/6706b96f-9bc3-4664-9fdc-2c0693ddf787-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.650314 master-0 kubenswrapper[26053]: I0318 09:08:52.650265 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1ba3504d-c2ce-407f-b0e6-14582e17560e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1ba3504d-c2ce-407f-b0e6-14582e17560e\") " pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.650748 master-0 kubenswrapper[26053]: I0318 09:08:52.650705 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 18 09:08:52.659311 master-0 kubenswrapper[26053]: I0318 09:08:52.659253 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48zmw\" (UniqueName: \"kubernetes.io/projected/6706b96f-9bc3-4664-9fdc-2c0693ddf787-kube-api-access-48zmw\") pod \"thanos-querier-5bc4ddd65f-jtdvg\" (UID: \"6706b96f-9bc3-4664-9fdc-2c0693ddf787\") " pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" Mar 18 09:08:52.928516 master-0 kubenswrapper[26053]: I0318 09:08:52.928430 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"
Mar 18 09:08:53.099886 master-0 kubenswrapper[26053]: I0318 09:08:53.099827 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5"]
Mar 18 09:08:53.100319 master-0 kubenswrapper[26053]: W0318 09:08:53.100237 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb4437ea_0a1e_478b_a9fe_a06c182f83a1.slice/crio-cc4cafd8a9af129ef9d4cf4f0d9b3c355a25a58d2d2721a7c27165b1e77e7cb4 WatchSource:0}: Error finding container cc4cafd8a9af129ef9d4cf4f0d9b3c355a25a58d2d2721a7c27165b1e77e7cb4: Status 404 returned error can't find the container with id cc4cafd8a9af129ef9d4cf4f0d9b3c355a25a58d2d2721a7c27165b1e77e7cb4
Mar 18 09:08:53.178017 master-0 kubenswrapper[26053]: I0318 09:08:53.177939 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 18 09:08:53.253826 master-0 kubenswrapper[26053]: I0318 09:08:53.253348 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 18 09:08:53.260391 master-0 kubenswrapper[26053]: W0318 09:08:53.260288 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ba3504d_c2ce_407f_b0e6_14582e17560e.slice/crio-010040226c3c7424c81c4887510f566503508c8f28bc546b0d5d0d725c9375af WatchSource:0}: Error finding container 010040226c3c7424c81c4887510f566503508c8f28bc546b0d5d0d725c9375af: Status 404 returned error can't find the container with id 010040226c3c7424c81c4887510f566503508c8f28bc546b0d5d0d725c9375af
Mar 18 09:08:53.355446 master-0 kubenswrapper[26053]: I0318 09:08:53.355380 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"]
Mar 18 09:08:53.359260 master-0 kubenswrapper[26053]: W0318 09:08:53.359217 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6706b96f_9bc3_4664_9fdc_2c0693ddf787.slice/crio-2f56e16c449b3477e4b3930eefcbb4b7a07e92243a5aa4a00766109db957ef33 WatchSource:0}: Error finding container 2f56e16c449b3477e4b3930eefcbb4b7a07e92243a5aa4a00766109db957ef33: Status 404 returned error can't find the container with id 2f56e16c449b3477e4b3930eefcbb4b7a07e92243a5aa4a00766109db957ef33
Mar 18 09:08:53.414908 master-0 kubenswrapper[26053]: I0318 09:08:53.414832 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"010040226c3c7424c81c4887510f566503508c8f28bc546b0d5d0d725c9375af"}
Mar 18 09:08:53.415837 master-0 kubenswrapper[26053]: I0318 09:08:53.415778 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"2f56e16c449b3477e4b3930eefcbb4b7a07e92243a5aa4a00766109db957ef33"}
Mar 18 09:08:53.416991 master-0 kubenswrapper[26053]: I0318 09:08:53.416951 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" event={"ID":"db4437ea-0a1e-478b-a9fe-a06c182f83a1","Type":"ContainerStarted","Data":"cc4cafd8a9af129ef9d4cf4f0d9b3c355a25a58d2d2721a7c27165b1e77e7cb4"}
Mar 18 09:08:53.418032 master-0 kubenswrapper[26053]: I0318 09:08:53.417987 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"9d82856cc4a526eb2c8187e964c656143d11d351f363137f6a4c9a54e26527e2"}
Mar 18 09:08:55.434927 master-0 kubenswrapper[26053]: I0318 09:08:55.434866 26053 generic.go:334] "Generic (PLEG): container finished" podID="2e6ee2ab-ba60-4663-90ab-10035e03107a" containerID="5b528400661aef0e26dd2e1364fa405261feaa157fff5c059cdbcb2826199d75" exitCode=0
Mar 18 09:08:55.434927 master-0 kubenswrapper[26053]: I0318 09:08:55.434910 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerDied","Data":"5b528400661aef0e26dd2e1364fa405261feaa157fff5c059cdbcb2826199d75"}
Mar 18 09:08:55.437739 master-0 kubenswrapper[26053]: I0318 09:08:55.437682 26053 generic.go:334] "Generic (PLEG): container finished" podID="1ba3504d-c2ce-407f-b0e6-14582e17560e" containerID="5594eea5f872c5aa0f012e57cbc846e77c5da98fc92569cf2fb0606cec4ea4b3" exitCode=0
Mar 18 09:08:55.437797 master-0 kubenswrapper[26053]: I0318 09:08:55.437748 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerDied","Data":"5594eea5f872c5aa0f012e57cbc846e77c5da98fc92569cf2fb0606cec4ea4b3"}
Mar 18 09:08:55.438901 master-0 kubenswrapper[26053]: I0318 09:08:55.438868 26053 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 18 09:08:58.141946 master-0 kubenswrapper[26053]: I0318 09:08:58.141735 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"f59e47ffb8f2006683fddb71f35023d41a597702e5c45ac4fc8ce61bafe93360"}
Mar 18 09:08:58.141946 master-0 kubenswrapper[26053]: I0318 09:08:58.141903 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"f7bbfc7ea6133337fbe2072ed6d71bd4cd6501437dbbcb437e14ea1d6438e770"}
Mar 18 09:08:58.141946 master-0 kubenswrapper[26053]: I0318 09:08:58.141923 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"88118a49de1c521770a6aabbb62d787d1d778e8f1ed48647235b10381155e771"}
Mar 18 09:08:58.145712 master-0 kubenswrapper[26053]: I0318 09:08:58.145675 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" event={"ID":"db4437ea-0a1e-478b-a9fe-a06c182f83a1","Type":"ContainerStarted","Data":"7a19306a336c038ace8ba38e33a0ca3c8135bf3bd4a8de6d9ad92abeaa9b2eb8"}
Mar 18 09:08:58.145790 master-0 kubenswrapper[26053]: I0318 09:08:58.145717 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" event={"ID":"db4437ea-0a1e-478b-a9fe-a06c182f83a1","Type":"ContainerStarted","Data":"8398c7bc87ca50c035a25011adef7654920343d75488aef03c0d6c81a8c651fe"}
Mar 18 09:08:58.145790 master-0 kubenswrapper[26053]: I0318 09:08:58.145764 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" event={"ID":"db4437ea-0a1e-478b-a9fe-a06c182f83a1","Type":"ContainerStarted","Data":"684bf9938370752af353785e1cbb1e48ba7133ebaefff06ff0c4f3abff8d3dfb"}
Mar 18 09:08:58.178532 master-0 kubenswrapper[26053]: I0318 09:08:58.178448 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-55b7f8bbf6-nj5q5" podStartSLOduration=16.298991607 podStartE2EDuration="19.17842085s" podCreationTimestamp="2026-03-18 09:08:39 +0000 UTC" firstStartedPulling="2026-03-18 09:08:53.105353967 +0000 UTC m=+320.598705358" lastFinishedPulling="2026-03-18 09:08:55.98478322 +0000 UTC m=+323.478134601" observedRunningTime="2026-03-18 09:08:58.171874067 +0000 UTC m=+325.665225458" watchObservedRunningTime="2026-03-18 09:08:58.17842085 +0000 UTC m=+325.671772231"
Mar 18 09:08:59.590356 master-0 kubenswrapper[26053]: I0318 09:08:59.590294 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64b4885569-gmdjt"]
Mar 18 09:08:59.591086 master-0 kubenswrapper[26053]: I0318 09:08:59.591059 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64b4885569-gmdjt"]
Mar 18 09:08:59.591160 master-0 kubenswrapper[26053]: I0318 09:08:59.591139 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.664555 master-0 kubenswrapper[26053]: I0318 09:08:59.664492 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.664746 master-0 kubenswrapper[26053]: I0318 09:08:59.664633 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.664746 master-0 kubenswrapper[26053]: I0318 09:08:59.664694 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmksc\" (UniqueName: \"kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.664746 master-0 kubenswrapper[26053]: I0318 09:08:59.664728 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.664941 master-0 kubenswrapper[26053]: I0318 09:08:59.664856 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.665047 master-0 kubenswrapper[26053]: I0318 09:08:59.665016 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.665175 master-0 kubenswrapper[26053]: I0318 09:08:59.665145 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.766852 master-0 kubenswrapper[26053]: I0318 09:08:59.766765 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.766852 master-0 kubenswrapper[26053]: I0318 09:08:59.766857 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.767129 master-0 kubenswrapper[26053]: I0318 09:08:59.766898 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.767129 master-0 kubenswrapper[26053]: I0318 09:08:59.766960 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.767129 master-0 kubenswrapper[26053]: I0318 09:08:59.767006 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.767129 master-0 kubenswrapper[26053]: I0318 09:08:59.767026 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmksc\" (UniqueName: \"kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.767129 master-0 kubenswrapper[26053]: I0318 09:08:59.767048 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.769926 master-0 kubenswrapper[26053]: I0318 09:08:59.767978 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.769926 master-0 kubenswrapper[26053]: I0318 09:08:59.768863 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.769926 master-0 kubenswrapper[26053]: I0318 09:08:59.769875 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.770138 master-0 kubenswrapper[26053]: I0318 09:08:59.769982 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.771728 master-0 kubenswrapper[26053]: I0318 09:08:59.771691 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.773432 master-0 kubenswrapper[26053]: I0318 09:08:59.773356 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:08:59.785756 master-0 kubenswrapper[26053]: I0318 09:08:59.785546 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmksc\" (UniqueName: \"kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc\") pod \"console-64b4885569-gmdjt\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:09:00.021917 master-0 kubenswrapper[26053]: I0318 09:09:00.021846 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64b4885569-gmdjt"
Mar 18 09:09:00.162011 master-0 kubenswrapper[26053]: I0318 09:09:00.161901 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"b2e3de87a32387edd4926d3df43a211d2c709877a461ce30592fb97c38b005dd"}
Mar 18 09:09:00.172199 master-0 kubenswrapper[26053]: I0318 09:09:00.172094 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"82ca02a6ddeeafc5d91de5770de4dac4e92fb0a681ba1bc270a5a16aceb16360"}
Mar 18 09:09:01.423839 master-0 kubenswrapper[26053]: I0318 09:09:01.423798 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64b4885569-gmdjt"]
Mar 18 09:09:01.425351 master-0 kubenswrapper[26053]: W0318 09:09:01.425280 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50e64936_f20b_4d5a_99ec_3264186272a3.slice/crio-6ce514659ffc94260379b15fb6afda36dd77368c6747d6cfefea08c466ded85d WatchSource:0}: Error finding container 6ce514659ffc94260379b15fb6afda36dd77368c6747d6cfefea08c466ded85d: Status 404 returned error can't find the container with id 6ce514659ffc94260379b15fb6afda36dd77368c6747d6cfefea08c466ded85d
Mar 18 09:09:02.190876 master-0 kubenswrapper[26053]: I0318 09:09:02.190661 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"90fd119801a78d82a4410697e2e3a899d66b80b997ab7d5c03f38c9085b0db80"}
Mar 18 09:09:02.190876 master-0 kubenswrapper[26053]: I0318 09:09:02.190791 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"a437597379fc63182e80addd139cfd8094273a63a90d8a0d084fd4c9fa72ef84"}
Mar 18 09:09:02.190876 master-0 kubenswrapper[26053]: I0318 09:09:02.190825 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"3299fc83567acbb8971ab78513b20bb4c9b507010d3e28e29a630eb91b8df4cb"}
Mar 18 09:09:02.190876 master-0 kubenswrapper[26053]: I0318 09:09:02.190859 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"c64e3de149d2cc248435b4898cd418e4f8d5589ab8ea4dda81d8cedbea91aff0"}
Mar 18 09:09:02.191242 master-0 kubenswrapper[26053]: I0318 09:09:02.190889 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"2a8f61ac7de12c32fe1c2245e99907da3cfd703cf64eb5524a736f6771be9ddf"}
Mar 18 09:09:02.191242 master-0 kubenswrapper[26053]: I0318 09:09:02.190931 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"2e6ee2ab-ba60-4663-90ab-10035e03107a","Type":"ContainerStarted","Data":"8e7e133dae4506fe50a62ef1e5a008f4ab2fb568225c9db94d87b07d05db4ffb"}
Mar 18 09:09:02.195771 master-0 kubenswrapper[26053]: I0318 09:09:02.195728 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"53f6e4757f8b2157ebc8a494b44bb2b781149563a0e25017a9dea05cb93a8098"}
Mar 18 09:09:02.195964 master-0 kubenswrapper[26053]: I0318 09:09:02.195937 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"80ec8ae110c4084235ab41eaaaac78f88a9cf92031f3e2dc4f006cf63acaa8db"}
Mar 18 09:09:02.196108 master-0 kubenswrapper[26053]: I0318 09:09:02.196083 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"397bc034d84533f79bcbdde936a9a7ac6c4799b9cddc428add5307c81085227b"}
Mar 18 09:09:02.196300 master-0 kubenswrapper[26053]: I0318 09:09:02.196228 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"91358d53b6b7c74e5e844da62e19e3023a80f999c5fa91621231b50ce55d9af8"}
Mar 18 09:09:02.196947 master-0 kubenswrapper[26053]: I0318 09:09:02.196434 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1ba3504d-c2ce-407f-b0e6-14582e17560e","Type":"ContainerStarted","Data":"c35c9a4035b6c638876de348ed5c4919d25289cdfc51b9df6fa615fa02d76974"}
Mar 18 09:09:02.200199 master-0 kubenswrapper[26053]: I0318 09:09:02.200121 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"33011fff66a74c049d8570a8e96cb74f98006476b7d0742d9dcf8b812e9926b0"}
Mar 18 09:09:02.200284 master-0 kubenswrapper[26053]: I0318 09:09:02.200201 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" event={"ID":"6706b96f-9bc3-4664-9fdc-2c0693ddf787","Type":"ContainerStarted","Data":"2dc662f43c5468e1c94e5b14a0b4ffdc455ba87c48c24f41a00bab9b4520532c"}
Mar 18 09:09:02.200527 master-0 kubenswrapper[26053]: I0318 09:09:02.200493 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"
Mar 18 09:09:02.202089 master-0 kubenswrapper[26053]: I0318 09:09:02.202056 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64b4885569-gmdjt" event={"ID":"50e64936-f20b-4d5a-99ec-3264186272a3","Type":"ContainerStarted","Data":"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f"}
Mar 18 09:09:02.202267 master-0 kubenswrapper[26053]: I0318 09:09:02.202238 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64b4885569-gmdjt" event={"ID":"50e64936-f20b-4d5a-99ec-3264186272a3","Type":"ContainerStarted","Data":"6ce514659ffc94260379b15fb6afda36dd77368c6747d6cfefea08c466ded85d"}
Mar 18 09:09:02.210112 master-0 kubenswrapper[26053]: I0318 09:09:02.210063 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg"
Mar 18 09:09:02.222668 master-0 kubenswrapper[26053]: I0318 09:09:02.222586 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=13.342727806 podStartE2EDuration="21.222554576s" podCreationTimestamp="2026-03-18 09:08:41 +0000 UTC" firstStartedPulling="2026-03-18 09:08:53.175543022 +0000 UTC m=+320.668894403" lastFinishedPulling="2026-03-18 09:09:01.055369792 +0000 UTC m=+328.548721173" observedRunningTime="2026-03-18 09:09:02.222010122 +0000 UTC m=+329.715361523" watchObservedRunningTime="2026-03-18 09:09:02.222554576 +0000 UTC m=+329.715905957"
Mar 18 09:09:02.245539 master-0 kubenswrapper[26053]: I0318 09:09:02.245153 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64b4885569-gmdjt" podStartSLOduration=3.245129917 podStartE2EDuration="3.245129917s" podCreationTimestamp="2026-03-18 09:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:09:02.242108542 +0000 UTC m=+329.735459943" watchObservedRunningTime="2026-03-18 09:09:02.245129917 +0000 UTC m=+329.738481298"
Mar 18 09:09:02.291170 master-0 kubenswrapper[26053]: I0318 09:09:02.291050 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=20.015899335 podStartE2EDuration="26.291018789s" podCreationTimestamp="2026-03-18 09:08:36 +0000 UTC" firstStartedPulling="2026-03-18 09:08:53.268540665 +0000 UTC m=+320.761892036" lastFinishedPulling="2026-03-18 09:08:59.543660109 +0000 UTC m=+327.037011490" observedRunningTime="2026-03-18 09:09:02.276249861 +0000 UTC m=+329.769601262" watchObservedRunningTime="2026-03-18 09:09:02.291018789 +0000 UTC m=+329.784370210"
Mar 18 09:09:02.342607 master-0 kubenswrapper[26053]: I0318 09:09:02.342508 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5bc4ddd65f-jtdvg" podStartSLOduration=20.153027393 podStartE2EDuration="26.342483768s" podCreationTimestamp="2026-03-18 09:08:36 +0000 UTC" firstStartedPulling="2026-03-18 09:08:53.361372233 +0000 UTC m=+320.854723614" lastFinishedPulling="2026-03-18 09:08:59.550828608 +0000 UTC m=+327.044179989" observedRunningTime="2026-03-18 09:09:02.336697094 +0000 UTC m=+329.830048515" watchObservedRunningTime="2026-03-18 09:09:02.342483768 +0000 UTC m=+329.835835149"
Mar 18 09:09:02.599991 master-0 kubenswrapper[26053]: I0318 09:09:02.599815 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 18 09:09:04.066900 master-0 kubenswrapper[26053]: I0318 09:09:04.066835 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 18 09:09:04.070695 master-0 kubenswrapper[26053]: I0318 09:09:04.070613 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:09:04.070881 master-0 kubenswrapper[26053]: I0318 09:09:04.070738 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.071348 master-0 kubenswrapper[26053]: I0318 09:09:04.071253 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver" containerID="cri-o://bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4" gracePeriod=15
Mar 18 09:09:04.071348 master-0 kubenswrapper[26053]: I0318 09:09:04.071323 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38" gracePeriod=15
Mar 18 09:09:04.071841 master-0 kubenswrapper[26053]: I0318 09:09:04.071313 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a" gracePeriod=15
Mar 18 09:09:04.072010 master-0 kubenswrapper[26053]: I0318 09:09:04.071380 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18" gracePeriod=15
Mar 18 09:09:04.072010 master-0 kubenswrapper[26053]: I0318 09:09:04.071287 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424" gracePeriod=15
Mar 18 09:09:04.119433 master-0 kubenswrapper[26053]: E0318 09:09:04.117413 26053 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189de4637519d20b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:09:04.071365131 +0000 UTC m=+331.564716512,LastTimestamp:2026-03-18 09:09:04.071365131 +0000 UTC m=+331.564716512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 18 09:09:04.122801 master-0 kubenswrapper[26053]: I0318 09:09:04.122734 26053 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:09:04.123420 master-0 kubenswrapper[26053]: E0318 09:09:04.123338 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 09:09:04.123420 master-0 kubenswrapper[26053]: I0318 09:09:04.123382 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 09:09:04.123420 master-0 kubenswrapper[26053]: E0318 09:09:04.123410 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: I0318 09:09:04.123427 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="setup"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: E0318 09:09:04.123456 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: I0318 09:09:04.123475 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: E0318 09:09:04.123505 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: I0318 09:09:04.123522 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: E0318 09:09:04.123559 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: I0318 09:09:04.123684 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: E0318 09:09:04.123726 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:09:04.123746 master-0 kubenswrapper[26053]: I0318 09:09:04.123744 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:09:04.124430 master-0 kubenswrapper[26053]: I0318 09:09:04.124078 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-regeneration-controller"
Mar 18 09:09:04.124430 master-0 kubenswrapper[26053]: I0318 09:09:04.124160 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-insecure-readyz"
Mar 18 09:09:04.124430 master-0 kubenswrapper[26053]: I0318 09:09:04.124194 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver"
Mar 18 09:09:04.124430 master-0 kubenswrapper[26053]: I0318 09:09:04.124227 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-cert-syncer"
Mar 18 09:09:04.124430 master-0 kubenswrapper[26053]: I0318 09:09:04.124256 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5ce05b3d592e63f1f92202d52b9635" containerName="kube-apiserver-check-endpoints"
Mar 18 09:09:04.138033 master-0 kubenswrapper[26053]: I0318 09:09:04.137972 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:04.138467 master-0 kubenswrapper[26053]: I0318 09:09:04.138100 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.138467 master-0 kubenswrapper[26053]: I0318 09:09:04.138157 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:04.138467 master-0 kubenswrapper[26053]: I0318 09:09:04.138323 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.138467 master-0 kubenswrapper[26053]: I0318 09:09:04.138424 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.140108 master-0 kubenswrapper[26053]: I0318 09:09:04.138473 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.140108 master-0 kubenswrapper[26053]: I0318 09:09:04.138506 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.140108 master-0 kubenswrapper[26053]: I0318 09:09:04.138588 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:04.174037 master-0 kubenswrapper[26053]: E0318 09:09:04.173982 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.219149 master-0 kubenswrapper[26053]: I0318 09:09:04.219114 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log"
Mar 18 09:09:04.219814 master-0 kubenswrapper[26053]: I0318 09:09:04.219785 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a" exitCode=0
Mar 18 09:09:04.219814 master-0 kubenswrapper[26053]: I0318 09:09:04.219808 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424" exitCode=0
Mar 18 09:09:04.219934 master-0 kubenswrapper[26053]: I0318 09:09:04.219819 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38" exitCode=0
Mar 18 09:09:04.219934 master-0 kubenswrapper[26053]: I0318 09:09:04.219827 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18" exitCode=2
Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.239766 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.239830 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.240014 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.240063 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.239922 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.240247 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.240370 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.240657 master-0 kubenswrapper[26053]: I0318 09:09:04.240675 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.240948 master-0 kubenswrapper[26053]: I0318 09:09:04.240741 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.240948 master-0 kubenswrapper[26053]: I0318 09:09:04.240767 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.240948 master-0 kubenswrapper[26053]: I0318 09:09:04.240852 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.241077 master-0 kubenswrapper[26053]: I0318 09:09:04.241049 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.241113 master-0 kubenswrapper[26053]: I0318 09:09:04.241087 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.241446 master-0 kubenswrapper[26053]: I0318 09:09:04.241376 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.241783 master-0 kubenswrapper[26053]: I0318 09:09:04.241745 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.242527 master-0 kubenswrapper[26053]: I0318 09:09:04.242482 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/274c4bebf95a655851b2cf276fe43ef7-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"274c4bebf95a655851b2cf276fe43ef7\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:04.475643 master-0 kubenswrapper[26053]: I0318 09:09:04.475555 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:04.500178 master-0 kubenswrapper[26053]: W0318 09:09:04.500068 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbfbf2b56df0323ba118d68bfdad8b9.slice/crio-930a0912a671e77d5b8a7f8c4db04302bff3bb6be2a6d478e146dad8132edad3 WatchSource:0}: Error finding container 930a0912a671e77d5b8a7f8c4db04302bff3bb6be2a6d478e146dad8132edad3: Status 404 returned error can't find the container with id 930a0912a671e77d5b8a7f8c4db04302bff3bb6be2a6d478e146dad8132edad3 Mar 18 09:09:05.232180 master-0 kubenswrapper[26053]: I0318 09:09:05.232062 26053 generic.go:334] "Generic (PLEG): container finished" podID="1723c159-3187-46be-89bb-a529ca0c54db" containerID="32724c056de4657bf1580f9b9722f5f0804388890f96ca693367772644921120" exitCode=0 Mar 18 09:09:05.233118 master-0 kubenswrapper[26053]: I0318 09:09:05.232171 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"1723c159-3187-46be-89bb-a529ca0c54db","Type":"ContainerDied","Data":"32724c056de4657bf1580f9b9722f5f0804388890f96ca693367772644921120"} Mar 18 09:09:05.234960 master-0 kubenswrapper[26053]: I0318 09:09:05.234856 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:05.234960 master-0 kubenswrapper[26053]: I0318 09:09:05.234910 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b"} Mar 18 09:09:05.235153 master-0 kubenswrapper[26053]: I0318 09:09:05.234983 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"ebbfbf2b56df0323ba118d68bfdad8b9","Type":"ContainerStarted","Data":"930a0912a671e77d5b8a7f8c4db04302bff3bb6be2a6d478e146dad8132edad3"} Mar 18 09:09:05.236222 master-0 kubenswrapper[26053]: E0318 09:09:05.236144 26053 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:05.236222 master-0 kubenswrapper[26053]: I0318 09:09:05.236165 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:06.464634 master-0 kubenswrapper[26053]: I0318 09:09:06.464588 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 09:09:06.466345 master-0 kubenswrapper[26053]: I0318 09:09:06.465758 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:06.467234 master-0 kubenswrapper[26053]: I0318 09:09:06.466889 26053 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:06.467531 master-0 kubenswrapper[26053]: I0318 09:09:06.467476 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:06.593015 master-0 kubenswrapper[26053]: I0318 09:09:06.592953 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 09:09:06.593015 master-0 kubenswrapper[26053]: I0318 09:09:06.593000 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 09:09:06.593182 master-0 kubenswrapper[26053]: I0318 09:09:06.593104 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") pod \"7d5ce05b3d592e63f1f92202d52b9635\" (UID: \"7d5ce05b3d592e63f1f92202d52b9635\") " Mar 18 09:09:06.593730 master-0 
kubenswrapper[26053]: I0318 09:09:06.593300 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:06.593730 master-0 kubenswrapper[26053]: I0318 09:09:06.593355 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:06.593730 master-0 kubenswrapper[26053]: I0318 09:09:06.593405 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "7d5ce05b3d592e63f1f92202d52b9635" (UID: "7d5ce05b3d592e63f1f92202d52b9635"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:06.593730 master-0 kubenswrapper[26053]: I0318 09:09:06.593604 26053 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:06.593730 master-0 kubenswrapper[26053]: I0318 09:09:06.593627 26053 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:06.593730 master-0 kubenswrapper[26053]: I0318 09:09:06.593642 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7d5ce05b3d592e63f1f92202d52b9635-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:06.644209 master-0 kubenswrapper[26053]: I0318 09:09:06.641492 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 09:09:06.644209 master-0 kubenswrapper[26053]: I0318 09:09:06.643129 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:06.644209 master-0 kubenswrapper[26053]: I0318 09:09:06.643992 26053 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:06.743014 master-0 kubenswrapper[26053]: I0318 09:09:06.742923 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5ce05b3d592e63f1f92202d52b9635" path="/var/lib/kubelet/pods/7d5ce05b3d592e63f1f92202d52b9635/volumes" Mar 18 09:09:06.796311 master-0 kubenswrapper[26053]: I0318 09:09:06.796256 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock\") pod \"1723c159-3187-46be-89bb-a529ca0c54db\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " Mar 18 09:09:06.796429 master-0 kubenswrapper[26053]: I0318 09:09:06.796406 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir\") pod \"1723c159-3187-46be-89bb-a529ca0c54db\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " Mar 18 09:09:06.796518 master-0 kubenswrapper[26053]: I0318 09:09:06.796494 26053 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access\") pod \"1723c159-3187-46be-89bb-a529ca0c54db\" (UID: \"1723c159-3187-46be-89bb-a529ca0c54db\") " Mar 18 09:09:06.797653 master-0 kubenswrapper[26053]: I0318 09:09:06.797605 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock" (OuterVolumeSpecName: "var-lock") pod "1723c159-3187-46be-89bb-a529ca0c54db" (UID: "1723c159-3187-46be-89bb-a529ca0c54db"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:06.797780 master-0 kubenswrapper[26053]: I0318 09:09:06.797632 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1723c159-3187-46be-89bb-a529ca0c54db" (UID: "1723c159-3187-46be-89bb-a529ca0c54db"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:06.801161 master-0 kubenswrapper[26053]: I0318 09:09:06.801111 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1723c159-3187-46be-89bb-a529ca0c54db" (UID: "1723c159-3187-46be-89bb-a529ca0c54db"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:09:06.900061 master-0 kubenswrapper[26053]: I0318 09:09:06.900011 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1723c159-3187-46be-89bb-a529ca0c54db-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:06.900061 master-0 kubenswrapper[26053]: I0318 09:09:06.900044 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:06.900061 master-0 kubenswrapper[26053]: I0318 09:09:06.900054 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1723c159-3187-46be-89bb-a529ca0c54db-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:07.255969 master-0 kubenswrapper[26053]: E0318 09:09:07.255916 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.256544 master-0 kubenswrapper[26053]: E0318 09:09:07.256515 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.257531 master-0 kubenswrapper[26053]: E0318 09:09:07.257284 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.257917 master-0 kubenswrapper[26053]: E0318 09:09:07.257861 26053 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.258481 master-0 kubenswrapper[26053]: E0318 09:09:07.258449 26053 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.258541 master-0 kubenswrapper[26053]: I0318 09:09:07.258487 26053 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 18 09:09:07.258943 master-0 kubenswrapper[26053]: E0318 09:09:07.258910 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 18 09:09:07.262426 master-0 kubenswrapper[26053]: I0318 09:09:07.262398 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Mar 18 09:09:07.262701 master-0 kubenswrapper[26053]: I0318 09:09:07.262622 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"1723c159-3187-46be-89bb-a529ca0c54db","Type":"ContainerDied","Data":"065e6e9a9bf3a7a541110af4dfc16ea75dfe81736047d4d0a53cd3fe069e12df"} Mar 18 09:09:07.262763 master-0 kubenswrapper[26053]: I0318 09:09:07.262736 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065e6e9a9bf3a7a541110af4dfc16ea75dfe81736047d4d0a53cd3fe069e12df" Mar 18 09:09:07.266111 master-0 kubenswrapper[26053]: I0318 09:09:07.266080 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_7d5ce05b3d592e63f1f92202d52b9635/kube-apiserver-cert-syncer/0.log" Mar 18 09:09:07.267967 master-0 kubenswrapper[26053]: I0318 09:09:07.267654 26053 generic.go:334] "Generic (PLEG): container finished" podID="7d5ce05b3d592e63f1f92202d52b9635" containerID="bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4" exitCode=0 Mar 18 09:09:07.267967 master-0 kubenswrapper[26053]: I0318 09:09:07.267745 26053 scope.go:117] "RemoveContainer" containerID="dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a" Mar 18 09:09:07.267967 master-0 kubenswrapper[26053]: I0318 09:09:07.267789 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:07.269540 master-0 kubenswrapper[26053]: I0318 09:09:07.269483 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.271820 master-0 kubenswrapper[26053]: I0318 09:09:07.271783 26053 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.273806 master-0 kubenswrapper[26053]: I0318 09:09:07.273783 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.274693 master-0 kubenswrapper[26053]: I0318 09:09:07.274583 26053 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.283548 master-0 kubenswrapper[26053]: I0318 09:09:07.283501 26053 status_manager.go:851] "Failed to get status for pod" podUID="7d5ce05b3d592e63f1f92202d52b9635" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.284075 master-0 kubenswrapper[26053]: I0318 09:09:07.284047 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:07.289457 master-0 kubenswrapper[26053]: I0318 09:09:07.289025 26053 scope.go:117] "RemoveContainer" containerID="b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424" Mar 18 09:09:07.308163 master-0 kubenswrapper[26053]: I0318 09:09:07.307954 26053 scope.go:117] "RemoveContainer" containerID="2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38" Mar 18 09:09:07.325042 master-0 kubenswrapper[26053]: I0318 09:09:07.325004 26053 scope.go:117] "RemoveContainer" containerID="d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18" Mar 18 09:09:07.343853 master-0 kubenswrapper[26053]: I0318 09:09:07.343816 26053 scope.go:117] "RemoveContainer" containerID="bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4" Mar 18 09:09:07.359600 master-0 kubenswrapper[26053]: I0318 09:09:07.359550 26053 scope.go:117] "RemoveContainer" containerID="19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0" Mar 18 09:09:07.380762 master-0 kubenswrapper[26053]: I0318 09:09:07.380708 26053 scope.go:117] "RemoveContainer" containerID="dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a" Mar 18 09:09:07.381623 master-0 kubenswrapper[26053]: E0318 09:09:07.381592 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a\": container with ID starting with dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a not found: ID does not exist" containerID="dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a" Mar 18 09:09:07.381752 master-0 kubenswrapper[26053]: I0318 09:09:07.381628 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a"} err="failed to get container status \"dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a\": rpc error: code = NotFound desc = could not find container \"dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a\": container with ID starting with dcee15c91521d767d4ea40bb222a0577ed62eb70a724af613b28117055e47b7a not found: ID does not exist" Mar 18 09:09:07.381752 master-0 kubenswrapper[26053]: I0318 09:09:07.381649 26053 scope.go:117] "RemoveContainer" containerID="b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424" Mar 18 09:09:07.382051 master-0 kubenswrapper[26053]: E0318 09:09:07.382023 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424\": container with ID starting with b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424 not found: ID does not exist" containerID="b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424" Mar 18 09:09:07.382051 master-0 kubenswrapper[26053]: I0318 09:09:07.382055 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424"} err="failed to get container status \"b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424\": rpc error: code = NotFound desc = could not find container 
\"b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424\": container with ID starting with b27fd72d2df6aa8485ccfdccefee7e08421525c7ce41e315130d4dbf81b08424 not found: ID does not exist" Mar 18 09:09:07.382445 master-0 kubenswrapper[26053]: I0318 09:09:07.382077 26053 scope.go:117] "RemoveContainer" containerID="2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38" Mar 18 09:09:07.382810 master-0 kubenswrapper[26053]: E0318 09:09:07.382698 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38\": container with ID starting with 2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38 not found: ID does not exist" containerID="2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38" Mar 18 09:09:07.382810 master-0 kubenswrapper[26053]: I0318 09:09:07.382733 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38"} err="failed to get container status \"2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38\": rpc error: code = NotFound desc = could not find container \"2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38\": container with ID starting with 2bffac0d3737efa9fd4e382b7d3bdc3841a1ab5fa52c8358b5630ed2ae18bb38 not found: ID does not exist" Mar 18 09:09:07.382810 master-0 kubenswrapper[26053]: I0318 09:09:07.382751 26053 scope.go:117] "RemoveContainer" containerID="d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18" Mar 18 09:09:07.383016 master-0 kubenswrapper[26053]: E0318 09:09:07.382971 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18\": container with ID starting with 
d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18 not found: ID does not exist" containerID="d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18" Mar 18 09:09:07.383016 master-0 kubenswrapper[26053]: I0318 09:09:07.382993 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18"} err="failed to get container status \"d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18\": rpc error: code = NotFound desc = could not find container \"d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18\": container with ID starting with d86be09cf0053c9e4298b054fc85c000e30ef5a3cb4ffb7f6bdab8fd73fc3e18 not found: ID does not exist" Mar 18 09:09:07.383016 master-0 kubenswrapper[26053]: I0318 09:09:07.383010 26053 scope.go:117] "RemoveContainer" containerID="bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4" Mar 18 09:09:07.383215 master-0 kubenswrapper[26053]: E0318 09:09:07.383190 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4\": container with ID starting with bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4 not found: ID does not exist" containerID="bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4" Mar 18 09:09:07.383275 master-0 kubenswrapper[26053]: I0318 09:09:07.383215 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4"} err="failed to get container status \"bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4\": rpc error: code = NotFound desc = could not find container \"bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4\": container with ID starting with 
bbf23a50550d9252ce1ebd70cefec9cc9a541dd37fc9b4ca4e7be5f197c8b9a4 not found: ID does not exist" Mar 18 09:09:07.383275 master-0 kubenswrapper[26053]: I0318 09:09:07.383232 26053 scope.go:117] "RemoveContainer" containerID="19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0" Mar 18 09:09:07.383559 master-0 kubenswrapper[26053]: E0318 09:09:07.383538 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0\": container with ID starting with 19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0 not found: ID does not exist" containerID="19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0" Mar 18 09:09:07.383622 master-0 kubenswrapper[26053]: I0318 09:09:07.383575 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0"} err="failed to get container status \"19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0\": rpc error: code = NotFound desc = could not find container \"19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0\": container with ID starting with 19bd526f5db2d8b53fc478a4d8138f1698660b5597c3be00125c76e7346152f0 not found: ID does not exist" Mar 18 09:09:07.460586 master-0 kubenswrapper[26053]: E0318 09:09:07.460179 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 18 09:09:07.861465 master-0 kubenswrapper[26053]: E0318 09:09:07.861395 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 18 09:09:08.420134 master-0 kubenswrapper[26053]: E0318 09:09:08.419928 26053 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189de4637519d20b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:7d5ce05b3d592e63f1f92202d52b9635,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-18 09:09:04.071365131 +0000 UTC m=+331.564716512,LastTimestamp:2026-03-18 09:09:04.071365131 +0000 UTC m=+331.564716512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 18 09:09:08.663531 master-0 kubenswrapper[26053]: E0318 09:09:08.663421 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 18 09:09:10.022584 master-0 kubenswrapper[26053]: I0318 09:09:10.022520 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:09:10.023362 master-0 kubenswrapper[26053]: I0318 09:09:10.023343 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:09:10.027561 master-0 kubenswrapper[26053]: I0318 09:09:10.027518 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:09:10.031482 master-0 kubenswrapper[26053]: I0318 09:09:10.031437 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:10.032042 master-0 kubenswrapper[26053]: I0318 09:09:10.031993 26053 status_manager.go:851] "Failed to get status for pod" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" pod="openshift-console/console-64b4885569-gmdjt" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-64b4885569-gmdjt\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:10.264848 master-0 kubenswrapper[26053]: E0318 09:09:10.264787 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 18 09:09:10.305484 master-0 kubenswrapper[26053]: I0318 09:09:10.305341 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:09:10.306360 master-0 kubenswrapper[26053]: I0318 09:09:10.306288 26053 status_manager.go:851] "Failed to get status for pod" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" pod="openshift-console/console-64b4885569-gmdjt" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-64b4885569-gmdjt\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 18 09:09:10.307063 master-0 kubenswrapper[26053]: I0318 09:09:10.306980 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:12.745311 master-0 kubenswrapper[26053]: I0318 09:09:12.745177 26053 status_manager.go:851] "Failed to get status for pod" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" pod="openshift-console/console-64b4885569-gmdjt" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-64b4885569-gmdjt\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:12.746801 master-0 kubenswrapper[26053]: I0318 09:09:12.746707 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:13.465894 master-0 kubenswrapper[26053]: E0318 09:09:13.465799 26053 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 18 09:09:15.730295 master-0 kubenswrapper[26053]: I0318 09:09:15.730215 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:15.735938 master-0 kubenswrapper[26053]: I0318 09:09:15.735841 26053 status_manager.go:851] "Failed to get status for pod" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" pod="openshift-console/console-64b4885569-gmdjt" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-64b4885569-gmdjt\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:15.736457 master-0 kubenswrapper[26053]: I0318 09:09:15.736398 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:15.755519 master-0 kubenswrapper[26053]: I0318 09:09:15.755446 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2" Mar 18 09:09:15.755519 master-0 kubenswrapper[26053]: I0318 09:09:15.755501 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2" Mar 18 09:09:15.756848 master-0 kubenswrapper[26053]: E0318 09:09:15.756760 26053 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:15.757801 master-0 kubenswrapper[26053]: I0318 09:09:15.757744 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:15.793542 master-0 kubenswrapper[26053]: W0318 09:09:15.792337 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274c4bebf95a655851b2cf276fe43ef7.slice/crio-44d8b600434f736e0bc20e958a18b6aa89d2aeaf78508a15f1f6ccea49fda16f WatchSource:0}: Error finding container 44d8b600434f736e0bc20e958a18b6aa89d2aeaf78508a15f1f6ccea49fda16f: Status 404 returned error can't find the container with id 44d8b600434f736e0bc20e958a18b6aa89d2aeaf78508a15f1f6ccea49fda16f Mar 18 09:09:16.146489 master-0 kubenswrapper[26053]: E0318 09:09:16.146045 26053 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:09:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:09:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:09:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-18T09:09:16Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ddc5283caf2ced75a94ddf0e8a43c431889692007e8a875a187b25c35b45a9e2\\\"],\\\"sizeBytes\\\":2895807090},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1a25ef962e8f26b0d756aa0987d45d570c0afb2e2d2507cf2fee734792b95657\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:688d991fddd7c0947af40f1c2e803a9a4ccef32b897e1bb3447e76c87ea4b753\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746519514},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86833de447f25d1d0fc15ed5460c5068cc48b18b78b8108304c5b5fd1dff04ab\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a41181d28dfacb78bea3690c390c965912300bc666e6e31a54a9382dd0329758\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1251896539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:15e3bdacc64320529707b0286fcaaf0059f0f5eaaafacf2c4bfee4b90be77eee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:26b5f4283e14ca039e027e637271bdbf1f92abf0bc56c32b01252e8eb9a95071\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223649493},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
82f2c59d19eb73ad5c0f93b0a63003c1885f5297959c9c45b401d1a74aea6e76\\\"],\\\"sizeBytes\\\":880382887},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:de91abd5ad76fb491881a75a0feb4b8ca5600ceb5e15a4b0b687ada01ea0a44c\\\"],\\\"sizeBytes\\\":862205633},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5bbb8535e2496de8389585ebbe696e7d7b9bad2b27785ad8a30a0fc683b0a22d\\\"],\\\"sizeBytes\\\":633877280},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f3038df8df25746bb5095296d4e5740f2356f85c1ed8d43f1b3d281e294826e5\\\"],\\\"sizeBytes\\\":605698193},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:112a03f2411f871cdaca5f20daef71024dac710113d5f30897117a5a02f6b6f5\\\"],\\\"sizeBytes\\\":557428271},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30a2f97d7785ce8b0ea5115e67c4554b64adefbc7856bcf6f4fe6cc7e938a310\\\"],\\\"sizeBytes\\\":513582374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679d
c787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98bf5467a01195e20aeea7d6f0b130ddacc00b73bc5312253b8c34e7208538f8\\\"],\\\"sizeBytes\\\":512235769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1973d56a1097a48ea0ebf2c4dbae1ed86fa67bb0116f4962f7720d48aa337d27\\\"],\\\"sizeBytes\\\":504662731},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf72297fee61ec9950f6868881ad3e84be8692ca08f084b3d155d93a766c0823\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:002dfb86e17ad8f5cc232a7d2dce183b23335c8ecb7e7d31dcf3e4446b390777\\\"],\\\"sizeBytes\\\":487159945}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.147593 master-0 kubenswrapper[26053]: E0318 09:09:16.147507 26053 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.148342 master-0 kubenswrapper[26053]: E0318 09:09:16.148275 26053 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.149316 master-0 kubenswrapper[26053]: E0318 09:09:16.149231 26053 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.150209 master-0 kubenswrapper[26053]: E0318 09:09:16.150155 26053 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.150209 master-0 kubenswrapper[26053]: E0318 09:09:16.150196 26053 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 18 09:09:16.353427 master-0 kubenswrapper[26053]: I0318 09:09:16.353279 26053 generic.go:334] "Generic (PLEG): container finished" podID="274c4bebf95a655851b2cf276fe43ef7" containerID="4023ed9206352a3ddc8a4cbf397fee07b3674232206ef1039ba57687fe0be09a" exitCode=0 Mar 18 09:09:16.353427 master-0 kubenswrapper[26053]: I0318 09:09:16.353341 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerDied","Data":"4023ed9206352a3ddc8a4cbf397fee07b3674232206ef1039ba57687fe0be09a"} Mar 18 09:09:16.353427 master-0 kubenswrapper[26053]: I0318 09:09:16.353373 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"44d8b600434f736e0bc20e958a18b6aa89d2aeaf78508a15f1f6ccea49fda16f"} Mar 18 09:09:16.353741 master-0 kubenswrapper[26053]: I0318 09:09:16.353698 26053 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2" Mar 18 09:09:16.353741 master-0 kubenswrapper[26053]: I0318 09:09:16.353716 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2" Mar 18 09:09:16.354444 master-0 kubenswrapper[26053]: E0318 09:09:16.354397 26053 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 18 09:09:16.354662 master-0 kubenswrapper[26053]: I0318 09:09:16.354447 26053 status_manager.go:851] "Failed to get status for pod" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" pod="openshift-console/console-64b4885569-gmdjt" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-64b4885569-gmdjt\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:16.355172 master-0 kubenswrapper[26053]: I0318 09:09:16.355117 26053 status_manager.go:851] "Failed to get status for pod" podUID="1723c159-3187-46be-89bb-a529ca0c54db" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 18 09:09:17.383531 master-0 kubenswrapper[26053]: I0318 09:09:17.383475 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"c01655c0984bfad1f660a02e4648f804128ebdc87c3f8f253242338c0d207f3f"} Mar 18 09:09:17.383531 master-0 kubenswrapper[26053]: I0318 09:09:17.383521 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"b67b56627d1be72d151feebc6042ef25894077afa07e90ba1da46b960d7f20a1"} Mar 18 09:09:18.393741 master-0 kubenswrapper[26053]: I0318 09:09:18.393674 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"9ceb2cd2934dee5da44ce663b6e6dd4814a79d3bfd93374f626a6307b60b16e9"} Mar 18 09:09:19.402991 master-0 kubenswrapper[26053]: I0318 09:09:19.402940 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager/0.log" Mar 18 09:09:19.403467 master-0 kubenswrapper[26053]: I0318 09:09:19.402999 26053 generic.go:334] "Generic (PLEG): container finished" podID="60c2ba061fb7c3edad3900526541ee3c" containerID="ed8bdc24b42ed8397f238b0c55ea4555545fbf502b6a47a78f76d63cdd9cc08f" exitCode=1 Mar 18 09:09:19.403467 master-0 kubenswrapper[26053]: I0318 09:09:19.403055 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerDied","Data":"ed8bdc24b42ed8397f238b0c55ea4555545fbf502b6a47a78f76d63cdd9cc08f"} Mar 18 09:09:19.403643 master-0 kubenswrapper[26053]: I0318 09:09:19.403599 26053 scope.go:117] "RemoveContainer" containerID="ed8bdc24b42ed8397f238b0c55ea4555545fbf502b6a47a78f76d63cdd9cc08f" Mar 18 09:09:19.406629 master-0 kubenswrapper[26053]: I0318 09:09:19.406555 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"cffc6bc573a348f589ab3114f7008b62958af8d2ad8b94f1c55b59491e2f68c6"} Mar 18 09:09:19.406697 master-0 
kubenswrapper[26053]: I0318 09:09:19.406639 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"274c4bebf95a655851b2cf276fe43ef7","Type":"ContainerStarted","Data":"ccf25e7a841d82332e585db47de3f0ac59c49ec4c3b3d4a9078a5c1c78e23491"}
Mar 18 09:09:19.406993 master-0 kubenswrapper[26053]: I0318 09:09:19.406965 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:19.407137 master-0 kubenswrapper[26053]: I0318 09:09:19.407115 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:19.407209 master-0 kubenswrapper[26053]: I0318 09:09:19.407198 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:20.424868 master-0 kubenswrapper[26053]: I0318 09:09:20.424762 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager/0.log"
Mar 18 09:09:20.425851 master-0 kubenswrapper[26053]: I0318 09:09:20.425009 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"60c2ba061fb7c3edad3900526541ee3c","Type":"ContainerStarted","Data":"e2312ca4bc36c3067315c67ea5484812afd6cc65bceaf66493a13a06e24d3095"}
Mar 18 09:09:20.758100 master-0 kubenswrapper[26053]: I0318 09:09:20.758029 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:20.758100 master-0 kubenswrapper[26053]: I0318 09:09:20.758103 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:20.772815 master-0 kubenswrapper[26053]: I0318 09:09:20.772702 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:23.809641 master-0 kubenswrapper[26053]: I0318 09:09:23.809302 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:09:23.809641 master-0 kubenswrapper[26053]: I0318 09:09:23.809545 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:09:23.818045 master-0 kubenswrapper[26053]: I0318 09:09:23.817970 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:09:24.427722 master-0 kubenswrapper[26053]: I0318 09:09:24.427635 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:24.465271 master-0 kubenswrapper[26053]: I0318 09:09:24.465190 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:24.465271 master-0 kubenswrapper[26053]: I0318 09:09:24.465246 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:24.474418 master-0 kubenswrapper[26053]: I0318 09:09:24.474355 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:24.569046 master-0 kubenswrapper[26053]: I0318 09:09:24.568823 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="2b7c7f11-435d-4688-8f8b-7efe2bda6bae"
Mar 18 09:09:25.475328 master-0 kubenswrapper[26053]: I0318 09:09:25.475270 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:25.475328 master-0 kubenswrapper[26053]: I0318 09:09:25.475308 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:32.761613 master-0 kubenswrapper[26053]: I0318 09:09:32.761276 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="274c4bebf95a655851b2cf276fe43ef7" podUID="2b7c7f11-435d-4688-8f8b-7efe2bda6bae"
Mar 18 09:09:33.703552 master-0 kubenswrapper[26053]: I0318 09:09:33.703450 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-xhpr4"
Mar 18 09:09:33.762352 master-0 kubenswrapper[26053]: I0318 09:09:33.762277 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 18 09:09:33.817099 master-0 kubenswrapper[26053]: I0318 09:09:33.817050 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 18 09:09:34.581887 master-0 kubenswrapper[26053]: I0318 09:09:34.581824 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 18 09:09:34.586627 master-0 kubenswrapper[26053]: I0318 09:09:34.586592 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kvnts"
Mar 18 09:09:34.631024 master-0 kubenswrapper[26053]: I0318 09:09:34.630957 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-mbtdj"
Mar 18 09:09:34.939808 master-0 kubenswrapper[26053]: I0318 09:09:34.939623 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 18 09:09:35.137720 master-0 kubenswrapper[26053]: I0318 09:09:35.137615 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-l7k6v"
Mar 18 09:09:35.169916 master-0 kubenswrapper[26053]: I0318 09:09:35.169772 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-kldf7"
Mar 18 09:09:35.191805 master-0 kubenswrapper[26053]: I0318 09:09:35.191641 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 18 09:09:35.389919 master-0 kubenswrapper[26053]: I0318 09:09:35.389861 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:09:35.443797 master-0 kubenswrapper[26053]: I0318 09:09:35.443633 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 18 09:09:35.908521 master-0 kubenswrapper[26053]: I0318 09:09:35.908433 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 18 09:09:35.984605 master-0 kubenswrapper[26053]: I0318 09:09:35.984490 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 18 09:09:36.131718 master-0 kubenswrapper[26053]: I0318 09:09:36.131623 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 18 09:09:36.204694 master-0 kubenswrapper[26053]: I0318 09:09:36.204479 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 18 09:09:36.239609 master-0 kubenswrapper[26053]: I0318 09:09:36.239524 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 18 09:09:36.302061 master-0 kubenswrapper[26053]: I0318 09:09:36.301992 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 18 09:09:36.418423 master-0 kubenswrapper[26053]: I0318 09:09:36.418334 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 18 09:09:36.544915 master-0 kubenswrapper[26053]: I0318 09:09:36.544837 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 18 09:09:36.615996 master-0 kubenswrapper[26053]: I0318 09:09:36.615908 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 18 09:09:36.669358 master-0 kubenswrapper[26053]: I0318 09:09:36.669283 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 18 09:09:36.695928 master-0 kubenswrapper[26053]: I0318 09:09:36.695875 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 18 09:09:36.857888 master-0 kubenswrapper[26053]: I0318 09:09:36.857716 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 18 09:09:36.919191 master-0 kubenswrapper[26053]: I0318 09:09:36.919126 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-khzbd"
Mar 18 09:09:36.950641 master-0 kubenswrapper[26053]: I0318 09:09:36.950546 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 18 09:09:36.981378 master-0 kubenswrapper[26053]: I0318 09:09:36.981285 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 18 09:09:37.033153 master-0 kubenswrapper[26053]: I0318 09:09:37.033097 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 18 09:09:37.093803 master-0 kubenswrapper[26053]: I0318 09:09:37.093737 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 18 09:09:37.244377 master-0 kubenswrapper[26053]: I0318 09:09:37.244322 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 18 09:09:37.318340 master-0 kubenswrapper[26053]: I0318 09:09:37.318276 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 18 09:09:37.399134 master-0 kubenswrapper[26053]: I0318 09:09:37.398937 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 18 09:09:37.422469 master-0 kubenswrapper[26053]: I0318 09:09:37.422413 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 18 09:09:37.465065 master-0 kubenswrapper[26053]: I0318 09:09:37.464734 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:09:37.487903 master-0 kubenswrapper[26053]: I0318 09:09:37.487841 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 18 09:09:37.496389 master-0 kubenswrapper[26053]: I0318 09:09:37.496252 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 18 09:09:37.576755 master-0 kubenswrapper[26053]: I0318 09:09:37.576683 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 18 09:09:37.629597 master-0 kubenswrapper[26053]: I0318 09:09:37.629524 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 18 09:09:37.648684 master-0 kubenswrapper[26053]: I0318 09:09:37.648644 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-l4xp6"
Mar 18 09:09:37.678929 master-0 kubenswrapper[26053]: I0318 09:09:37.678878 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 18 09:09:37.682590 master-0 kubenswrapper[26053]: I0318 09:09:37.682526 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 18 09:09:37.692203 master-0 kubenswrapper[26053]: I0318 09:09:37.692175 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 18 09:09:37.693780 master-0 kubenswrapper[26053]: I0318 09:09:37.693737 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-9ugglqvgh687f"
Mar 18 09:09:37.696631 master-0 kubenswrapper[26053]: I0318 09:09:37.696607 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 18 09:09:37.743780 master-0 kubenswrapper[26053]: I0318 09:09:37.743722 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 18 09:09:37.793676 master-0 kubenswrapper[26053]: I0318 09:09:37.793482 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 18 09:09:37.858362 master-0 kubenswrapper[26053]: I0318 09:09:37.858294 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 18 09:09:37.895722 master-0 kubenswrapper[26053]: I0318 09:09:37.895659 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 18 09:09:37.957991 master-0 kubenswrapper[26053]: I0318 09:09:37.957925 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 18 09:09:37.990000 master-0 kubenswrapper[26053]: I0318 09:09:37.989937 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-vvwvf"
Mar 18 09:09:37.995470 master-0 kubenswrapper[26053]: I0318 09:09:37.995410 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 18 09:09:38.037794 master-0 kubenswrapper[26053]: I0318 09:09:38.037737 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-gfnn4"
Mar 18 09:09:38.040221 master-0 kubenswrapper[26053]: I0318 09:09:38.040168 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 18 09:09:38.040754 master-0 kubenswrapper[26053]: I0318 09:09:38.040730 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 18 09:09:38.091687 master-0 kubenswrapper[26053]: I0318 09:09:38.091540 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 18 09:09:38.195825 master-0 kubenswrapper[26053]: I0318 09:09:38.195744 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 18 09:09:38.265129 master-0 kubenswrapper[26053]: I0318 09:09:38.265035 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-658wv"
Mar 18 09:09:38.299900 master-0 kubenswrapper[26053]: I0318 09:09:38.299840 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 18 09:09:38.323144 master-0 kubenswrapper[26053]: I0318 09:09:38.323019 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 18 09:09:38.351974 master-0 kubenswrapper[26053]: I0318 09:09:38.351793 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 18 09:09:38.450672 master-0 kubenswrapper[26053]: I0318 09:09:38.450589 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 18 09:09:38.507542 master-0 kubenswrapper[26053]: I0318 09:09:38.507489 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 18 09:09:38.532316 master-0 kubenswrapper[26053]: I0318 09:09:38.532220 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:09:38.554479 master-0 kubenswrapper[26053]: I0318 09:09:38.554410 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 18 09:09:38.649859 master-0 kubenswrapper[26053]: I0318 09:09:38.649722 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 18 09:09:38.691231 master-0 kubenswrapper[26053]: I0318 09:09:38.691140 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 18 09:09:38.851010 master-0 kubenswrapper[26053]: I0318 09:09:38.850924 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 18 09:09:38.884158 master-0 kubenswrapper[26053]: I0318 09:09:38.884099 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 18 09:09:38.934243 master-0 kubenswrapper[26053]: I0318 09:09:38.933698 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 18 09:09:38.934243 master-0 kubenswrapper[26053]: I0318 09:09:38.933925 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 18 09:09:39.067267 master-0 kubenswrapper[26053]: I0318 09:09:39.067052 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 18 09:09:39.077824 master-0 kubenswrapper[26053]: I0318 09:09:39.077735 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-pj2bk"
Mar 18 09:09:39.082158 master-0 kubenswrapper[26053]: I0318 09:09:39.081983 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 18 09:09:39.173129 master-0 kubenswrapper[26053]: I0318 09:09:39.173030 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 18 09:09:39.192203 master-0 kubenswrapper[26053]: I0318 09:09:39.192029 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 18 09:09:39.480969 master-0 kubenswrapper[26053]: I0318 09:09:39.480910 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-s4fhp"
Mar 18 09:09:39.495080 master-0 kubenswrapper[26053]: I0318 09:09:39.495029 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 18 09:09:39.501557 master-0 kubenswrapper[26053]: I0318 09:09:39.501512 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 18 09:09:39.681291 master-0 kubenswrapper[26053]: I0318 09:09:39.681167 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 18 09:09:39.686090 master-0 kubenswrapper[26053]: I0318 09:09:39.686029 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-jqmlx"
Mar 18 09:09:39.774645 master-0 kubenswrapper[26053]: I0318 09:09:39.774437 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 18 09:09:39.779095 master-0 kubenswrapper[26053]: I0318 09:09:39.779039 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 18 09:09:39.789304 master-0 kubenswrapper[26053]: I0318 09:09:39.789243 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 18 09:09:39.842769 master-0 kubenswrapper[26053]: I0318 09:09:39.842376 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-m9g5m"
Mar 18 09:09:39.845953 master-0 kubenswrapper[26053]: I0318 09:09:39.845901 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 18 09:09:39.893116 master-0 kubenswrapper[26053]: I0318 09:09:39.893060 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 18 09:09:39.893960 master-0 kubenswrapper[26053]: I0318 09:09:39.893922 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-9xv2f"
Mar 18 09:09:40.006803 master-0 kubenswrapper[26053]: I0318 09:09:40.006625 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 18 09:09:40.085387 master-0 kubenswrapper[26053]: I0318 09:09:40.085259 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 18 09:09:40.095886 master-0 kubenswrapper[26053]: I0318 09:09:40.095796 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 18 09:09:40.116559 master-0 kubenswrapper[26053]: I0318 09:09:40.116459 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 18 09:09:40.125304 master-0 kubenswrapper[26053]: I0318 09:09:40.125220 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 18 09:09:40.138613 master-0 kubenswrapper[26053]: I0318 09:09:40.137877 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 18 09:09:40.181679 master-0 kubenswrapper[26053]: I0318 09:09:40.181538 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 18 09:09:40.324019 master-0 kubenswrapper[26053]: I0318 09:09:40.323949 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 18 09:09:40.337364 master-0 kubenswrapper[26053]: I0318 09:09:40.337242 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 18 09:09:40.344265 master-0 kubenswrapper[26053]: I0318 09:09:40.344231 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 18 09:09:40.357018 master-0 kubenswrapper[26053]: I0318 09:09:40.356971 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 18 09:09:40.358994 master-0 kubenswrapper[26053]: I0318 09:09:40.358958 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 18 09:09:40.478877 master-0 kubenswrapper[26053]: I0318 09:09:40.478785 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 18 09:09:40.544508 master-0 kubenswrapper[26053]: I0318 09:09:40.544448 26053 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 18 09:09:40.567272 master-0 kubenswrapper[26053]: I0318 09:09:40.567219 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 18 09:09:40.631915 master-0 kubenswrapper[26053]: I0318 09:09:40.631768 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 18 09:09:40.654716 master-0 kubenswrapper[26053]: I0318 09:09:40.654624 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6tjc5u8lektnd"
Mar 18 09:09:40.699207 master-0 kubenswrapper[26053]: I0318 09:09:40.699104 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 18 09:09:40.703187 master-0 kubenswrapper[26053]: I0318 09:09:40.703047 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-29bbg"
Mar 18 09:09:40.728964 master-0 kubenswrapper[26053]: I0318 09:09:40.728858 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 18 09:09:40.748162 master-0 kubenswrapper[26053]: I0318 09:09:40.748095 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 18 09:09:40.772930 master-0 kubenswrapper[26053]: I0318 09:09:40.772863 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 18 09:09:40.785861 master-0 kubenswrapper[26053]: I0318 09:09:40.785774 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 18 09:09:40.801192 master-0 kubenswrapper[26053]: I0318 09:09:40.801115 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 18 09:09:40.815909 master-0 kubenswrapper[26053]: I0318 09:09:40.815856 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-4lqimvakop077"
Mar 18 09:09:40.856892 master-0 kubenswrapper[26053]: I0318 09:09:40.856801 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 18 09:09:40.865709 master-0 kubenswrapper[26053]: I0318 09:09:40.865658 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 18 09:09:40.917037 master-0 kubenswrapper[26053]: I0318 09:09:40.916330 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 18 09:09:41.017769 master-0 kubenswrapper[26053]: I0318 09:09:41.017636 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 18 09:09:41.028640 master-0 kubenswrapper[26053]: I0318 09:09:41.028471 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 18 09:09:41.079681 master-0 kubenswrapper[26053]: I0318 09:09:41.079602 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 18 09:09:41.100608 master-0 kubenswrapper[26053]: I0318 09:09:41.100530 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 18 09:09:41.115763 master-0 kubenswrapper[26053]: I0318 09:09:41.115690 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 18 09:09:41.259025 master-0 kubenswrapper[26053]: I0318 09:09:41.258937 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6mthr"
Mar 18 09:09:41.342473 master-0 kubenswrapper[26053]: I0318 09:09:41.342367 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 18 09:09:41.346377 master-0 kubenswrapper[26053]: I0318 09:09:41.346316 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 18 09:09:41.393934 master-0 kubenswrapper[26053]: I0318 09:09:41.393867 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 18 09:09:41.478520 master-0 kubenswrapper[26053]: I0318 09:09:41.478434 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 18 09:09:41.502389 master-0 kubenswrapper[26053]: I0318 09:09:41.502258 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 18 09:09:41.523851 master-0 kubenswrapper[26053]: I0318 09:09:41.523662 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 18 09:09:41.611812 master-0 kubenswrapper[26053]: I0318 09:09:41.611710 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 18 09:09:41.772969 master-0 kubenswrapper[26053]: I0318 09:09:41.772839 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 18 09:09:41.787177 master-0 kubenswrapper[26053]: I0318 09:09:41.787016 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 18 09:09:41.809623 master-0 kubenswrapper[26053]: I0318 09:09:41.809446 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 18 09:09:41.812508 master-0 kubenswrapper[26053]: I0318 09:09:41.812406 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 18 09:09:41.874137 master-0 kubenswrapper[26053]: I0318 09:09:41.874048 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 18 09:09:41.895210 master-0 kubenswrapper[26053]: I0318 09:09:41.895139 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 18 09:09:41.897464 master-0 kubenswrapper[26053]: I0318 09:09:41.897404 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-fhncm"
Mar 18 09:09:41.913227 master-0 kubenswrapper[26053]: I0318 09:09:41.913172 26053 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 18 09:09:41.920934 master-0 kubenswrapper[26053]: I0318 09:09:41.920893 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:09:41.921238 master-0 kubenswrapper[26053]: I0318 09:09:41.921217 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 18 09:09:41.921379 master-0 kubenswrapper[26053]: I0318 09:09:41.921357 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"]
Mar 18 09:09:41.921755 master-0 kubenswrapper[26053]: I0318 09:09:41.921725 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:41.921909 master-0 kubenswrapper[26053]: I0318 09:09:41.921886 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="7645e18a-f03c-4c7c-8b69-ba6ccc8743f2"
Mar 18 09:09:41.927066 master-0 kubenswrapper[26053]: I0318 09:09:41.927030 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 18 09:09:41.961774 master-0 kubenswrapper[26053]: I0318 09:09:41.961721 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 18 09:09:41.965488 master-0 kubenswrapper[26053]: I0318 09:09:41.965435 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 18 09:09:41.973105 master-0 kubenswrapper[26053]: I0318 09:09:41.972546 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=17.972474266 podStartE2EDuration="17.972474266s" podCreationTimestamp="2026-03-18 09:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:09:41.945875725 +0000 UTC m=+369.439227116" watchObservedRunningTime="2026-03-18 09:09:41.972474266 +0000 UTC m=+369.465825677"
Mar 18 09:09:42.066101 master-0 kubenswrapper[26053]: I0318 09:09:42.065992 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 18 09:09:42.089289 master-0 kubenswrapper[26053]: I0318 09:09:42.089224 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 18 09:09:42.255002 master-0 kubenswrapper[26053]: I0318 09:09:42.254952 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 18 09:09:42.292718 master-0 kubenswrapper[26053]: I0318 09:09:42.292659 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 18 09:09:42.329273 master-0 kubenswrapper[26053]: I0318 09:09:42.329126 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 18 09:09:42.338741 master-0 kubenswrapper[26053]: I0318 09:09:42.338681 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 18 09:09:42.366826 master-0 kubenswrapper[26053]: I0318 09:09:42.366751 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 18 09:09:42.430171 master-0 kubenswrapper[26053]: I0318 09:09:42.430096 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 18 09:09:42.461359 master-0 kubenswrapper[26053]: I0318 09:09:42.461161 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 18 09:09:42.500956 master-0 kubenswrapper[26053]: I0318 09:09:42.500865 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 18 09:09:42.568739 master-0 kubenswrapper[26053]: I0318 09:09:42.568652 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 18 09:09:42.575242 master-0 kubenswrapper[26053]: I0318 09:09:42.575183 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 18 09:09:42.631867 master-0 kubenswrapper[26053]: I0318 09:09:42.631673 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 18 09:09:42.736677 master-0 kubenswrapper[26053]: I0318 09:09:42.736536 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 18 09:09:42.796867 master-0 kubenswrapper[26053]: I0318 09:09:42.796754 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 18 09:09:42.812358 master-0 kubenswrapper[26053]: I0318 09:09:42.812248 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 18 09:09:42.896055 master-0 kubenswrapper[26053]: I0318 09:09:42.895862 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 18 09:09:42.911434 master-0 kubenswrapper[26053]: I0318 09:09:42.911355 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 18 09:09:42.926679 master-0 kubenswrapper[26053]: I0318 09:09:42.926562 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 18 09:09:43.009288 master-0 kubenswrapper[26053]: I0318 09:09:43.009168 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 18 09:09:43.043236 master-0 kubenswrapper[26053]: I0318 09:09:43.043133 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 18 09:09:43.047106 master-0 kubenswrapper[26053]: I0318 09:09:43.047038 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 18 09:09:43.085153 master-0 kubenswrapper[26053]: I0318 09:09:43.084114 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 18 09:09:43.085153 master-0 kubenswrapper[26053]: I0318 09:09:43.084776 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 18 09:09:43.089468 master-0 kubenswrapper[26053]: I0318 09:09:43.089400 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 18 09:09:43.216744 master-0 kubenswrapper[26053]: I0318 09:09:43.216685 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 18 09:09:43.235921 master-0 kubenswrapper[26053]: I0318 09:09:43.235862 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 18 09:09:43.320105 master-0 kubenswrapper[26053]: I0318 09:09:43.320062 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 18 09:09:43.338895 master-0 kubenswrapper[26053]: I0318 09:09:43.338832 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 18 09:09:43.523791 master-0 kubenswrapper[26053]: I0318 09:09:43.523639 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 18 09:09:43.537447 master-0 kubenswrapper[26053]: I0318 09:09:43.537362 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 18 09:09:43.562309 master-0 kubenswrapper[26053]: I0318 09:09:43.562197 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 18 09:09:43.565767 master-0 kubenswrapper[26053]: I0318 09:09:43.565719 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 18 09:09:43.578233 master-0 kubenswrapper[26053]: I0318 09:09:43.578163 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 18 09:09:43.612978 master-0 kubenswrapper[26053]: I0318 09:09:43.612901 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 18 09:09:43.672619 master-0 kubenswrapper[26053]: I0318 09:09:43.672523 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 18 09:09:43.809123 master-0 kubenswrapper[26053]: I0318 09:09:43.808962 26053 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 18 09:09:43.980461 master-0 kubenswrapper[26053]: I0318 09:09:43.980384 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 18 09:09:43.986350 master-0 kubenswrapper[26053]: I0318 09:09:43.986300 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-tnmb8" Mar 18 09:09:43.999515 master-0 kubenswrapper[26053]: I0318 09:09:43.999453 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 18 09:09:44.028780 master-0 kubenswrapper[26053]: I0318 09:09:44.028703 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 18 09:09:44.031472 master-0 kubenswrapper[26053]: I0318 09:09:44.031422 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 18 09:09:44.051551 master-0 kubenswrapper[26053]: I0318 09:09:44.051460 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 18 09:09:44.057762 master-0 kubenswrapper[26053]: I0318 09:09:44.057660 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 18 09:09:44.065296 master-0 kubenswrapper[26053]: I0318 09:09:44.065156 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 18 09:09:44.100557 master-0 kubenswrapper[26053]: I0318 09:09:44.100468 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 18 09:09:44.100893 master-0 kubenswrapper[26053]: I0318 09:09:44.100743 26053 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-rwvl6" Mar 18 09:09:44.110619 master-0 kubenswrapper[26053]: I0318 09:09:44.110538 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 18 09:09:44.110799 master-0 kubenswrapper[26053]: I0318 09:09:44.110661 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 18 09:09:44.138858 master-0 kubenswrapper[26053]: I0318 09:09:44.138726 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 18 09:09:44.143841 master-0 kubenswrapper[26053]: I0318 09:09:44.143783 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 18 09:09:44.194221 master-0 kubenswrapper[26053]: I0318 09:09:44.194125 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 18 09:09:44.195887 master-0 kubenswrapper[26053]: I0318 09:09:44.195835 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 18 09:09:44.215085 master-0 kubenswrapper[26053]: I0318 09:09:44.215001 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 18 09:09:44.226988 master-0 kubenswrapper[26053]: I0318 09:09:44.226918 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 18 09:09:44.282968 master-0 kubenswrapper[26053]: I0318 09:09:44.282882 26053 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 18 09:09:44.287975 master-0 kubenswrapper[26053]: I0318 09:09:44.287924 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 18 09:09:44.291395 master-0 kubenswrapper[26053]: I0318 09:09:44.291253 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 18 09:09:44.298840 master-0 kubenswrapper[26053]: I0318 09:09:44.298784 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 18 09:09:44.336306 master-0 kubenswrapper[26053]: I0318 09:09:44.336087 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 18 09:09:44.389641 master-0 kubenswrapper[26053]: I0318 09:09:44.389575 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 18 09:09:44.421458 master-0 kubenswrapper[26053]: I0318 09:09:44.421377 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 18 09:09:44.449068 master-0 kubenswrapper[26053]: I0318 09:09:44.448999 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 18 09:09:44.468779 master-0 kubenswrapper[26053]: I0318 09:09:44.468708 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-d6jf5" Mar 18 09:09:44.471810 master-0 kubenswrapper[26053]: I0318 09:09:44.471758 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 18 09:09:44.482562 master-0 kubenswrapper[26053]: I0318 09:09:44.482445 26053 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 18 09:09:44.558988 master-0 kubenswrapper[26053]: I0318 09:09:44.558910 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-pws99" Mar 18 09:09:44.573644 master-0 kubenswrapper[26053]: I0318 09:09:44.573187 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 18 09:09:44.573981 master-0 kubenswrapper[26053]: I0318 09:09:44.573816 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 18 09:09:44.655963 master-0 kubenswrapper[26053]: I0318 09:09:44.655779 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 18 09:09:44.720808 master-0 kubenswrapper[26053]: I0318 09:09:44.720756 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 18 09:09:44.784061 master-0 kubenswrapper[26053]: I0318 09:09:44.784000 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-zlc9x" Mar 18 09:09:44.895445 master-0 kubenswrapper[26053]: I0318 09:09:44.895355 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 18 09:09:44.929493 master-0 kubenswrapper[26053]: I0318 09:09:44.929340 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-mbgdl" Mar 18 09:09:44.960404 master-0 kubenswrapper[26053]: I0318 09:09:44.960263 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 18 09:09:44.962890 master-0 kubenswrapper[26053]: I0318 09:09:44.961741 26053 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 18 09:09:45.021094 master-0 kubenswrapper[26053]: I0318 09:09:45.021013 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-qwxs4" Mar 18 09:09:45.043733 master-0 kubenswrapper[26053]: I0318 09:09:45.043615 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 18 09:09:45.085908 master-0 kubenswrapper[26053]: I0318 09:09:45.085820 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 18 09:09:45.195326 master-0 kubenswrapper[26053]: I0318 09:09:45.195148 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-gx9ws" Mar 18 09:09:45.197086 master-0 kubenswrapper[26053]: I0318 09:09:45.197047 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 18 09:09:45.465169 master-0 kubenswrapper[26053]: I0318 09:09:45.464868 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 18 09:09:45.490419 master-0 kubenswrapper[26053]: I0318 09:09:45.490352 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 18 09:09:45.552414 master-0 kubenswrapper[26053]: I0318 09:09:45.552343 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 18 09:09:45.644649 master-0 kubenswrapper[26053]: I0318 09:09:45.644542 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 18 09:09:45.689783 master-0 kubenswrapper[26053]: I0318 09:09:45.689703 26053 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 18 09:09:45.698720 master-0 kubenswrapper[26053]: I0318 09:09:45.698665 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-pfhv7" Mar 18 09:09:45.705686 master-0 kubenswrapper[26053]: I0318 09:09:45.705534 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-kg24z" Mar 18 09:09:45.721431 master-0 kubenswrapper[26053]: I0318 09:09:45.721292 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-nvh22" Mar 18 09:09:45.733860 master-0 kubenswrapper[26053]: I0318 09:09:45.733781 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 18 09:09:45.763021 master-0 kubenswrapper[26053]: I0318 09:09:45.762902 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 18 09:09:45.764034 master-0 kubenswrapper[26053]: I0318 09:09:45.763941 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 18 09:09:45.766410 master-0 kubenswrapper[26053]: I0318 09:09:45.766371 26053 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 18 09:09:45.785630 master-0 kubenswrapper[26053]: I0318 09:09:45.785542 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 18 09:09:45.802313 master-0 kubenswrapper[26053]: I0318 09:09:45.802240 26053 scope.go:117] "RemoveContainer" containerID="cd8f1b2378c428693218d79b09a56c9b55b51bb98be0e6bcf8f6074d75fc8fec" Mar 18 09:09:45.827496 master-0 
kubenswrapper[26053]: I0318 09:09:45.827436 26053 scope.go:117] "RemoveContainer" containerID="4fc555cd68d5d190723bdb906f024eca28a915e20d6010038a593dff24a564cd" Mar 18 09:09:45.919168 master-0 kubenswrapper[26053]: I0318 09:09:45.919111 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 18 09:09:45.949195 master-0 kubenswrapper[26053]: I0318 09:09:45.949105 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 18 09:09:45.952968 master-0 kubenswrapper[26053]: I0318 09:09:45.952719 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 18 09:09:45.953148 master-0 kubenswrapper[26053]: I0318 09:09:45.952978 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" containerID="cri-o://3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b" gracePeriod=5 Mar 18 09:09:45.954314 master-0 kubenswrapper[26053]: I0318 09:09:45.954236 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 18 09:09:45.958254 master-0 kubenswrapper[26053]: I0318 09:09:45.958174 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 18 09:09:45.995670 master-0 kubenswrapper[26053]: I0318 09:09:45.995455 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 18 09:09:46.022293 master-0 kubenswrapper[26053]: I0318 09:09:46.020952 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 18 09:09:46.031529 master-0 kubenswrapper[26053]: I0318 
09:09:46.031453 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 18 09:09:46.058364 master-0 kubenswrapper[26053]: I0318 09:09:46.058272 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 18 09:09:46.087282 master-0 kubenswrapper[26053]: I0318 09:09:46.087202 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 18 09:09:46.138132 master-0 kubenswrapper[26053]: I0318 09:09:46.138063 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-s9qtf" Mar 18 09:09:46.152975 master-0 kubenswrapper[26053]: I0318 09:09:46.152893 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 18 09:09:46.220879 master-0 kubenswrapper[26053]: I0318 09:09:46.220784 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 18 09:09:46.297101 master-0 kubenswrapper[26053]: I0318 09:09:46.296980 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 18 09:09:46.315003 master-0 kubenswrapper[26053]: I0318 09:09:46.314918 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-mn6mb" Mar 18 09:09:46.340861 master-0 kubenswrapper[26053]: I0318 09:09:46.340787 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 18 09:09:46.390656 master-0 kubenswrapper[26053]: I0318 09:09:46.388164 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 18 09:09:46.429177 master-0 
kubenswrapper[26053]: I0318 09:09:46.429063 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 18 09:09:46.436044 master-0 kubenswrapper[26053]: I0318 09:09:46.435949 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 18 09:09:46.487366 master-0 kubenswrapper[26053]: I0318 09:09:46.487273 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 18 09:09:46.567302 master-0 kubenswrapper[26053]: I0318 09:09:46.567173 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 18 09:09:46.642918 master-0 kubenswrapper[26053]: I0318 09:09:46.642858 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 18 09:09:46.693950 master-0 kubenswrapper[26053]: I0318 09:09:46.693863 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hbb9q" Mar 18 09:09:46.783867 master-0 kubenswrapper[26053]: I0318 09:09:46.783751 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 18 09:09:46.787012 master-0 kubenswrapper[26053]: I0318 09:09:46.786936 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 18 09:09:46.791607 master-0 kubenswrapper[26053]: I0318 09:09:46.787893 26053 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 18 09:09:46.996236 master-0 kubenswrapper[26053]: I0318 09:09:46.996166 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 18 09:09:47.028401 master-0 kubenswrapper[26053]: 
I0318 09:09:47.028314 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 18 09:09:47.029525 master-0 kubenswrapper[26053]: I0318 09:09:47.029468 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 18 09:09:47.060132 master-0 kubenswrapper[26053]: I0318 09:09:47.060038 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 18 09:09:47.074191 master-0 kubenswrapper[26053]: I0318 09:09:47.074116 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 18 09:09:47.139682 master-0 kubenswrapper[26053]: I0318 09:09:47.105045 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 18 09:09:47.139682 master-0 kubenswrapper[26053]: I0318 09:09:47.125953 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 18 09:09:47.392554 master-0 kubenswrapper[26053]: I0318 09:09:47.392493 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 18 09:09:47.416283 master-0 kubenswrapper[26053]: I0318 09:09:47.416173 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 18 09:09:47.480868 master-0 kubenswrapper[26053]: I0318 09:09:47.480803 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 18 09:09:47.524021 master-0 kubenswrapper[26053]: I0318 09:09:47.523956 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 18 
09:09:47.546897 master-0 kubenswrapper[26053]: I0318 09:09:47.546805 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 18 09:09:47.642219 master-0 kubenswrapper[26053]: I0318 09:09:47.642063 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 18 09:09:47.669796 master-0 kubenswrapper[26053]: I0318 09:09:47.669735 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 18 09:09:47.669796 master-0 kubenswrapper[26053]: I0318 09:09:47.669762 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 18 09:09:47.674669 master-0 kubenswrapper[26053]: I0318 09:09:47.674625 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 18 09:09:47.867879 master-0 kubenswrapper[26053]: I0318 09:09:47.867793 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 18 09:09:47.966332 master-0 kubenswrapper[26053]: I0318 09:09:47.966272 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 18 09:09:48.029989 master-0 kubenswrapper[26053]: I0318 09:09:48.029907 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-nxx2s" Mar 18 09:09:48.096819 master-0 kubenswrapper[26053]: I0318 09:09:48.096723 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 18 09:09:48.219398 master-0 kubenswrapper[26053]: I0318 09:09:48.207855 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"federate-client-certs" Mar 18 09:09:48.242433 master-0 kubenswrapper[26053]: I0318 09:09:48.242336 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 18 09:09:48.265279 master-0 kubenswrapper[26053]: I0318 09:09:48.263833 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 18 09:09:48.300589 master-0 kubenswrapper[26053]: I0318 09:09:48.300495 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 18 09:09:48.326319 master-0 kubenswrapper[26053]: I0318 09:09:48.326264 26053 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 18 09:09:48.366151 master-0 kubenswrapper[26053]: I0318 09:09:48.366015 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 18 09:09:48.383159 master-0 kubenswrapper[26053]: I0318 09:09:48.383092 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 18 09:09:48.397781 master-0 kubenswrapper[26053]: I0318 09:09:48.397730 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 18 09:09:48.458077 master-0 kubenswrapper[26053]: I0318 09:09:48.458024 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 18 09:09:48.574750 master-0 kubenswrapper[26053]: I0318 09:09:48.574611 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 18 09:09:48.609842 master-0 kubenswrapper[26053]: I0318 09:09:48.609763 26053 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 18 09:09:48.698079 master-0 kubenswrapper[26053]: I0318 09:09:48.698010 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 18 09:09:48.782327 master-0 kubenswrapper[26053]: I0318 09:09:48.782281 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 18 09:09:48.813406 master-0 kubenswrapper[26053]: I0318 09:09:48.813351 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 18 09:09:48.931043 master-0 kubenswrapper[26053]: I0318 09:09:48.930785 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 18 09:09:48.939959 master-0 kubenswrapper[26053]: I0318 09:09:48.939931 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 18 09:09:49.017477 master-0 kubenswrapper[26053]: I0318 09:09:49.017410 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 18 09:09:49.182409 master-0 kubenswrapper[26053]: I0318 09:09:49.182229 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-8zgz4" Mar 18 09:09:49.242100 master-0 kubenswrapper[26053]: I0318 09:09:49.242039 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 18 09:09:49.391300 master-0 kubenswrapper[26053]: I0318 09:09:49.391260 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 18 09:09:49.461273 master-0 kubenswrapper[26053]: I0318 09:09:49.461212 26053 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 18 09:09:49.564293 master-0 kubenswrapper[26053]: I0318 09:09:49.564197 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-lgw5q" Mar 18 09:09:49.599597 master-0 kubenswrapper[26053]: I0318 09:09:49.599505 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 18 09:09:49.625776 master-0 kubenswrapper[26053]: I0318 09:09:49.625689 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-m2754" Mar 18 09:09:49.652441 master-0 kubenswrapper[26053]: I0318 09:09:49.652348 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 18 09:09:49.712427 master-0 kubenswrapper[26053]: I0318 09:09:49.712260 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 18 09:09:49.914809 master-0 kubenswrapper[26053]: I0318 09:09:49.914707 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 18 09:09:49.920731 master-0 kubenswrapper[26053]: I0318 09:09:49.920680 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 18 09:09:49.944640 master-0 kubenswrapper[26053]: I0318 09:09:49.944520 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 18 09:09:50.001643 master-0 kubenswrapper[26053]: I0318 09:09:50.001334 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 18 09:09:50.089961 master-0 kubenswrapper[26053]: I0318 09:09:50.089882 26053 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 18 09:09:50.262149 master-0 kubenswrapper[26053]: I0318 09:09:50.262004 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 18 09:09:50.311772 master-0 kubenswrapper[26053]: I0318 09:09:50.311707 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 18 09:09:50.339229 master-0 kubenswrapper[26053]: I0318 09:09:50.339170 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 18 09:09:50.376488 master-0 kubenswrapper[26053]: I0318 09:09:50.376428 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 18 09:09:50.421026 master-0 kubenswrapper[26053]: I0318 09:09:50.420918 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 18 09:09:50.508925 master-0 kubenswrapper[26053]: I0318 09:09:50.508835 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 18 09:09:50.734625 master-0 kubenswrapper[26053]: I0318 09:09:50.734502 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 18 09:09:51.118192 master-0 kubenswrapper[26053]: I0318 09:09:51.118115 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 18 09:09:51.150553 master-0 kubenswrapper[26053]: I0318 09:09:51.150475 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 18 09:09:51.275979 master-0 kubenswrapper[26053]: I0318 09:09:51.275830 26053 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 18 09:09:51.278401 master-0 kubenswrapper[26053]: I0318 09:09:51.278357 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 18 09:09:51.450622 master-0 kubenswrapper[26053]: I0318 09:09:51.450410 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 18 09:09:51.522224 master-0 kubenswrapper[26053]: I0318 09:09:51.522130 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 18 09:09:51.579796 master-0 kubenswrapper[26053]: I0318 09:09:51.579744 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 09:09:51.579796 master-0 kubenswrapper[26053]: I0318 09:09:51.579813 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:51.620267 master-0 kubenswrapper[26053]: I0318 09:09:51.620195 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 09:09:51.620267 master-0 kubenswrapper[26053]: I0318 09:09:51.620264 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 09:09:51.620267 master-0 kubenswrapper[26053]: I0318 09:09:51.620272 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests" (OuterVolumeSpecName: "manifests") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620313 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620339 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620373 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") pod \"ebbfbf2b56df0323ba118d68bfdad8b9\" (UID: \"ebbfbf2b56df0323ba118d68bfdad8b9\") " Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620406 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log" (OuterVolumeSpecName: "var-log") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620526 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620619 26053 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-manifests\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620817 26053 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-log\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:51.620920 master-0 kubenswrapper[26053]: I0318 09:09:51.620728 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock" (OuterVolumeSpecName: "var-lock") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:51.625449 master-0 kubenswrapper[26053]: I0318 09:09:51.625403 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "ebbfbf2b56df0323ba118d68bfdad8b9" (UID: "ebbfbf2b56df0323ba118d68bfdad8b9"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:09:51.645991 master-0 kubenswrapper[26053]: I0318 09:09:51.645918 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 18 09:09:51.729258 master-0 kubenswrapper[26053]: I0318 09:09:51.721307 26053 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:51.729258 master-0 kubenswrapper[26053]: I0318 09:09:51.721351 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:51.729258 master-0 kubenswrapper[26053]: I0318 09:09:51.721362 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/ebbfbf2b56df0323ba118d68bfdad8b9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:09:51.739711 master-0 kubenswrapper[26053]: I0318 09:09:51.739672 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_ebbfbf2b56df0323ba118d68bfdad8b9/startup-monitor/0.log" Mar 18 09:09:51.740187 master-0 kubenswrapper[26053]: I0318 09:09:51.739727 26053 generic.go:334] "Generic (PLEG): container finished" podID="ebbfbf2b56df0323ba118d68bfdad8b9" containerID="3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b" exitCode=137 Mar 18 09:09:51.740187 master-0 kubenswrapper[26053]: I0318 09:09:51.739779 26053 scope.go:117] "RemoveContainer" containerID="3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b" Mar 18 09:09:51.740187 master-0 kubenswrapper[26053]: I0318 09:09:51.739922 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 18 09:09:51.761249 master-0 kubenswrapper[26053]: I0318 09:09:51.761199 26053 scope.go:117] "RemoveContainer" containerID="3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b" Mar 18 09:09:51.761754 master-0 kubenswrapper[26053]: E0318 09:09:51.761705 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b\": container with ID starting with 3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b not found: ID does not exist" containerID="3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b" Mar 18 09:09:51.761846 master-0 kubenswrapper[26053]: I0318 09:09:51.761752 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b"} err="failed to get container status \"3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b\": rpc error: code = NotFound desc = could not find container \"3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b\": container with ID starting with 3880e884b2acf4cd0065ef7577133094d8ad3c43b1e6d0deadc965b17d76216b not found: ID does not exist" Mar 18 09:09:51.776975 master-0 kubenswrapper[26053]: I0318 09:09:51.776904 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 18 09:09:52.197069 master-0 kubenswrapper[26053]: I0318 09:09:52.196461 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 18 09:09:52.600294 master-0 kubenswrapper[26053]: I0318 09:09:52.600163 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 
09:09:52.642670 master-0 kubenswrapper[26053]: I0318 09:09:52.642537 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:09:52.745355 master-0 kubenswrapper[26053]: I0318 09:09:52.745259 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" path="/var/lib/kubelet/pods/ebbfbf2b56df0323ba118d68bfdad8b9/volumes" Mar 18 09:09:52.804774 master-0 kubenswrapper[26053]: I0318 09:09:52.804691 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 18 09:09:53.177418 master-0 kubenswrapper[26053]: I0318 09:09:53.177314 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 18 09:10:01.568520 master-0 kubenswrapper[26053]: I0318 09:10:01.568379 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: E0318 09:10:01.568700 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: I0318 09:10:01.568720 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: E0318 09:10:01.568730 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1723c159-3187-46be-89bb-a529ca0c54db" containerName="installer" Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: I0318 09:10:01.568738 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="1723c159-3187-46be-89bb-a529ca0c54db" containerName="installer" Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: I0318 09:10:01.568933 26053 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ebbfbf2b56df0323ba118d68bfdad8b9" containerName="startup-monitor" Mar 18 09:10:01.569363 master-0 kubenswrapper[26053]: I0318 09:10:01.568956 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="1723c159-3187-46be-89bb-a529ca0c54db" containerName="installer" Mar 18 09:10:01.569671 master-0 kubenswrapper[26053]: I0318 09:10:01.569438 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.590390 master-0 kubenswrapper[26053]: I0318 09:10:01.590330 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:10:01.633704 master-0 kubenswrapper[26053]: I0318 09:10:01.633636 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.633908 master-0 kubenswrapper[26053]: I0318 09:10:01.633767 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.633908 master-0 kubenswrapper[26053]: I0318 09:10:01.633801 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.633908 master-0 
kubenswrapper[26053]: I0318 09:10:01.633829 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j86wt\" (UniqueName: \"kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.633908 master-0 kubenswrapper[26053]: I0318 09:10:01.633861 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.634044 master-0 kubenswrapper[26053]: I0318 09:10:01.633979 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.634044 master-0 kubenswrapper[26053]: I0318 09:10:01.634023 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735551 master-0 kubenswrapper[26053]: I0318 09:10:01.735506 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: 
\"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735762 master-0 kubenswrapper[26053]: I0318 09:10:01.735600 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735762 master-0 kubenswrapper[26053]: I0318 09:10:01.735632 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735762 master-0 kubenswrapper[26053]: I0318 09:10:01.735672 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j86wt\" (UniqueName: \"kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735762 master-0 kubenswrapper[26053]: I0318 09:10:01.735699 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735762 master-0 kubenswrapper[26053]: I0318 09:10:01.735746 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config\") 
pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.735984 master-0 kubenswrapper[26053]: I0318 09:10:01.735769 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.736990 master-0 kubenswrapper[26053]: I0318 09:10:01.736951 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.737237 master-0 kubenswrapper[26053]: I0318 09:10:01.737167 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.737237 master-0 kubenswrapper[26053]: I0318 09:10:01.737208 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.737536 master-0 kubenswrapper[26053]: I0318 09:10:01.737508 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle\") 
pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.740574 master-0 kubenswrapper[26053]: I0318 09:10:01.740521 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.740709 master-0 kubenswrapper[26053]: I0318 09:10:01.740658 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.764786 master-0 kubenswrapper[26053]: I0318 09:10:01.764741 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j86wt\" (UniqueName: \"kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt\") pod \"console-5b85d959c9-8jjlz\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:01.889320 master-0 kubenswrapper[26053]: I0318 09:10:01.889197 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:02.327697 master-0 kubenswrapper[26053]: I0318 09:10:02.327636 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:10:02.333689 master-0 kubenswrapper[26053]: W0318 09:10:02.333622 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba86cadf_6b4a_4e54_a0ee_c410b4965f7e.slice/crio-e759d50c748b8462befb6b7cb167fdbffe5ef7b304563963661333ab3a016e51 WatchSource:0}: Error finding container e759d50c748b8462befb6b7cb167fdbffe5ef7b304563963661333ab3a016e51: Status 404 returned error can't find the container with id e759d50c748b8462befb6b7cb167fdbffe5ef7b304563963661333ab3a016e51 Mar 18 09:10:02.850660 master-0 kubenswrapper[26053]: I0318 09:10:02.850487 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b85d959c9-8jjlz" event={"ID":"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e","Type":"ContainerStarted","Data":"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f"} Mar 18 09:10:02.850660 master-0 kubenswrapper[26053]: I0318 09:10:02.850600 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b85d959c9-8jjlz" event={"ID":"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e","Type":"ContainerStarted","Data":"e759d50c748b8462befb6b7cb167fdbffe5ef7b304563963661333ab3a016e51"} Mar 18 09:10:02.881274 master-0 kubenswrapper[26053]: I0318 09:10:02.881177 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5b85d959c9-8jjlz" podStartSLOduration=1.881149303 podStartE2EDuration="1.881149303s" podCreationTimestamp="2026-03-18 09:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:10:02.87980362 +0000 UTC m=+390.373155001" 
watchObservedRunningTime="2026-03-18 09:10:02.881149303 +0000 UTC m=+390.374500694" Mar 18 09:10:07.002175 master-0 kubenswrapper[26053]: I0318 09:10:07.002048 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d9cb85584-jfkbk" podUID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" containerName="console" containerID="cri-o://155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090" gracePeriod=15 Mar 18 09:10:07.512372 master-0 kubenswrapper[26053]: I0318 09:10:07.512276 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9cb85584-jfkbk_09e381d6-17ca-4df3-a45f-22b95a1dc12f/console/0.log" Mar 18 09:10:07.512372 master-0 kubenswrapper[26053]: I0318 09:10:07.512373 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.535895 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.535991 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.536034 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: 
\"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.536073 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.536100 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.536215 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h66qm\" (UniqueName: \"kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.536299 master-0 kubenswrapper[26053]: I0318 09:10:07.536264 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config\") pod \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\" (UID: \"09e381d6-17ca-4df3-a45f-22b95a1dc12f\") " Mar 18 09:10:07.537956 master-0 kubenswrapper[26053]: I0318 09:10:07.537496 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config" (OuterVolumeSpecName: "console-config") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:07.538143 master-0 kubenswrapper[26053]: I0318 09:10:07.538057 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:07.541612 master-0 kubenswrapper[26053]: I0318 09:10:07.538981 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:07.541612 master-0 kubenswrapper[26053]: I0318 09:10:07.539192 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca" (OuterVolumeSpecName: "service-ca") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:07.541934 master-0 kubenswrapper[26053]: I0318 09:10:07.541719 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:07.542038 master-0 kubenswrapper[26053]: I0318 09:10:07.541934 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:07.542785 master-0 kubenswrapper[26053]: I0318 09:10:07.542751 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm" (OuterVolumeSpecName: "kube-api-access-h66qm") pod "09e381d6-17ca-4df3-a45f-22b95a1dc12f" (UID: "09e381d6-17ca-4df3-a45f-22b95a1dc12f"). InnerVolumeSpecName "kube-api-access-h66qm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:10:07.638195 master-0 kubenswrapper[26053]: I0318 09:10:07.638124 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h66qm\" (UniqueName: \"kubernetes.io/projected/09e381d6-17ca-4df3-a45f-22b95a1dc12f-kube-api-access-h66qm\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638195 master-0 kubenswrapper[26053]: I0318 09:10:07.638187 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638195 master-0 kubenswrapper[26053]: I0318 09:10:07.638207 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638672 master-0 kubenswrapper[26053]: I0318 09:10:07.638226 26053 reconciler_common.go:293] "Volume 
detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638672 master-0 kubenswrapper[26053]: I0318 09:10:07.638249 26053 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e381d6-17ca-4df3-a45f-22b95a1dc12f-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638672 master-0 kubenswrapper[26053]: I0318 09:10:07.638268 26053 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.638672 master-0 kubenswrapper[26053]: I0318 09:10:07.638286 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/09e381d6-17ca-4df3-a45f-22b95a1dc12f-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:07.894715 master-0 kubenswrapper[26053]: I0318 09:10:07.894662 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d9cb85584-jfkbk_09e381d6-17ca-4df3-a45f-22b95a1dc12f/console/0.log" Mar 18 09:10:07.894715 master-0 kubenswrapper[26053]: I0318 09:10:07.894711 26053 generic.go:334] "Generic (PLEG): container finished" podID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" containerID="155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090" exitCode=2 Mar 18 09:10:07.895025 master-0 kubenswrapper[26053]: I0318 09:10:07.894742 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d9cb85584-jfkbk" event={"ID":"09e381d6-17ca-4df3-a45f-22b95a1dc12f","Type":"ContainerDied","Data":"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090"} Mar 18 09:10:07.895025 master-0 kubenswrapper[26053]: I0318 09:10:07.894775 26053 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/console-5d9cb85584-jfkbk" event={"ID":"09e381d6-17ca-4df3-a45f-22b95a1dc12f","Type":"ContainerDied","Data":"6913c62288658645d8511a0bfad2d1c705dff63bb6ff0460e2744da85cf4ca17"} Mar 18 09:10:07.895025 master-0 kubenswrapper[26053]: I0318 09:10:07.894791 26053 scope.go:117] "RemoveContainer" containerID="155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090" Mar 18 09:10:07.895025 master-0 kubenswrapper[26053]: I0318 09:10:07.894883 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d9cb85584-jfkbk" Mar 18 09:10:07.920104 master-0 kubenswrapper[26053]: I0318 09:10:07.920050 26053 scope.go:117] "RemoveContainer" containerID="155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090" Mar 18 09:10:07.920905 master-0 kubenswrapper[26053]: E0318 09:10:07.920847 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090\": container with ID starting with 155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090 not found: ID does not exist" containerID="155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090" Mar 18 09:10:07.921013 master-0 kubenswrapper[26053]: I0318 09:10:07.920904 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090"} err="failed to get container status \"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090\": rpc error: code = NotFound desc = could not find container \"155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090\": container with ID starting with 155d62001c4b4a0a9cdbb3b6706bb412d1fe11db184da9574d84c776892e9090 not found: ID does not exist" Mar 18 09:10:07.932382 master-0 kubenswrapper[26053]: I0318 09:10:07.932293 26053 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"] Mar 18 09:10:07.937818 master-0 kubenswrapper[26053]: I0318 09:10:07.937744 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d9cb85584-jfkbk"] Mar 18 09:10:08.746253 master-0 kubenswrapper[26053]: I0318 09:10:08.746172 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" path="/var/lib/kubelet/pods/09e381d6-17ca-4df3-a45f-22b95a1dc12f/volumes" Mar 18 09:10:11.890064 master-0 kubenswrapper[26053]: I0318 09:10:11.890025 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:11.890638 master-0 kubenswrapper[26053]: I0318 09:10:11.890618 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:11.897120 master-0 kubenswrapper[26053]: I0318 09:10:11.897096 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:11.949893 master-0 kubenswrapper[26053]: I0318 09:10:11.949831 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:10:12.030651 master-0 kubenswrapper[26053]: I0318 09:10:12.030313 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64b4885569-gmdjt"] Mar 18 09:10:37.076448 master-0 kubenswrapper[26053]: I0318 09:10:37.076334 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64b4885569-gmdjt" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" containerName="console" containerID="cri-o://b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f" gracePeriod=15 Mar 18 09:10:37.622028 master-0 kubenswrapper[26053]: I0318 09:10:37.621953 26053 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-console_console-64b4885569-gmdjt_50e64936-f20b-4d5a-99ec-3264186272a3/console/0.log" Mar 18 09:10:37.622028 master-0 kubenswrapper[26053]: I0318 09:10:37.622037 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:10:37.741217 master-0 kubenswrapper[26053]: I0318 09:10:37.741153 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741217 master-0 kubenswrapper[26053]: I0318 09:10:37.741210 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741466 master-0 kubenswrapper[26053]: I0318 09:10:37.741267 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmksc\" (UniqueName: \"kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741466 master-0 kubenswrapper[26053]: I0318 09:10:37.741299 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741466 master-0 kubenswrapper[26053]: I0318 09:10:37.741327 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741466 master-0 kubenswrapper[26053]: I0318 09:10:37.741401 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.741466 master-0 kubenswrapper[26053]: I0318 09:10:37.741467 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca\") pod \"50e64936-f20b-4d5a-99ec-3264186272a3\" (UID: \"50e64936-f20b-4d5a-99ec-3264186272a3\") " Mar 18 09:10:37.742111 master-0 kubenswrapper[26053]: I0318 09:10:37.742059 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:37.742167 master-0 kubenswrapper[26053]: I0318 09:10:37.742118 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config" (OuterVolumeSpecName: "console-config") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:37.742243 master-0 kubenswrapper[26053]: I0318 09:10:37.742200 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca" (OuterVolumeSpecName: "service-ca") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:37.742398 master-0 kubenswrapper[26053]: I0318 09:10:37.742319 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:10:37.742673 master-0 kubenswrapper[26053]: I0318 09:10:37.742638 26053 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.742673 master-0 kubenswrapper[26053]: I0318 09:10:37.742666 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.742781 master-0 kubenswrapper[26053]: I0318 09:10:37.742682 26053 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.742781 master-0 kubenswrapper[26053]: I0318 09:10:37.742700 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/50e64936-f20b-4d5a-99ec-3264186272a3-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.744212 master-0 kubenswrapper[26053]: I0318 09:10:37.744163 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:37.744545 master-0 kubenswrapper[26053]: I0318 09:10:37.744499 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:10:37.744678 master-0 kubenswrapper[26053]: I0318 09:10:37.744641 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc" (OuterVolumeSpecName: "kube-api-access-bmksc") pod "50e64936-f20b-4d5a-99ec-3264186272a3" (UID: "50e64936-f20b-4d5a-99ec-3264186272a3"). InnerVolumeSpecName "kube-api-access-bmksc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:10:37.844676 master-0 kubenswrapper[26053]: I0318 09:10:37.844509 26053 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.844676 master-0 kubenswrapper[26053]: I0318 09:10:37.844622 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/50e64936-f20b-4d5a-99ec-3264186272a3-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:37.844676 master-0 kubenswrapper[26053]: I0318 09:10:37.844645 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmksc\" (UniqueName: \"kubernetes.io/projected/50e64936-f20b-4d5a-99ec-3264186272a3-kube-api-access-bmksc\") on node \"master-0\" DevicePath \"\"" Mar 18 09:10:38.197799 master-0 kubenswrapper[26053]: I0318 09:10:38.197585 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64b4885569-gmdjt_50e64936-f20b-4d5a-99ec-3264186272a3/console/0.log" Mar 18 09:10:38.197799 master-0 kubenswrapper[26053]: I0318 09:10:38.197754 26053 generic.go:334] "Generic (PLEG): container finished" podID="50e64936-f20b-4d5a-99ec-3264186272a3" containerID="b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f" exitCode=2 Mar 18 09:10:38.198911 master-0 kubenswrapper[26053]: I0318 09:10:38.197806 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64b4885569-gmdjt" event={"ID":"50e64936-f20b-4d5a-99ec-3264186272a3","Type":"ContainerDied","Data":"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f"} Mar 18 09:10:38.198911 master-0 kubenswrapper[26053]: I0318 09:10:38.197828 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64b4885569-gmdjt" Mar 18 09:10:38.198911 master-0 kubenswrapper[26053]: I0318 09:10:38.197849 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64b4885569-gmdjt" event={"ID":"50e64936-f20b-4d5a-99ec-3264186272a3","Type":"ContainerDied","Data":"6ce514659ffc94260379b15fb6afda36dd77368c6747d6cfefea08c466ded85d"} Mar 18 09:10:38.198911 master-0 kubenswrapper[26053]: I0318 09:10:38.197880 26053 scope.go:117] "RemoveContainer" containerID="b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f" Mar 18 09:10:38.226236 master-0 kubenswrapper[26053]: I0318 09:10:38.226167 26053 scope.go:117] "RemoveContainer" containerID="b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f" Mar 18 09:10:38.226988 master-0 kubenswrapper[26053]: E0318 09:10:38.226904 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f\": container with ID starting with b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f not found: ID does not exist" containerID="b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f" Mar 18 09:10:38.227155 master-0 kubenswrapper[26053]: I0318 09:10:38.226984 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f"} err="failed to get container status \"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f\": rpc error: code = NotFound desc = could not find container \"b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f\": container with ID starting with b0df16b442ff3f862a230f5e9f016abf9d900d19f60e12be3cb367e5df563d8f not found: ID does not exist" Mar 18 09:10:38.256765 master-0 kubenswrapper[26053]: I0318 09:10:38.256719 26053 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-console/console-64b4885569-gmdjt"] Mar 18 09:10:38.266914 master-0 kubenswrapper[26053]: I0318 09:10:38.266838 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64b4885569-gmdjt"] Mar 18 09:10:38.745251 master-0 kubenswrapper[26053]: I0318 09:10:38.745162 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" path="/var/lib/kubelet/pods/50e64936-f20b-4d5a-99ec-3264186272a3/volumes" Mar 18 09:10:54.950207 master-0 kubenswrapper[26053]: I0318 09:10:54.950121 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-x2ftq"] Mar 18 09:10:54.951043 master-0 kubenswrapper[26053]: E0318 09:10:54.950748 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" containerName="console" Mar 18 09:10:54.951043 master-0 kubenswrapper[26053]: I0318 09:10:54.950787 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" containerName="console" Mar 18 09:10:54.951043 master-0 kubenswrapper[26053]: E0318 09:10:54.950831 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" containerName="console" Mar 18 09:10:54.951043 master-0 kubenswrapper[26053]: I0318 09:10:54.950850 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" containerName="console" Mar 18 09:10:54.951252 master-0 kubenswrapper[26053]: I0318 09:10:54.951204 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="50e64936-f20b-4d5a-99ec-3264186272a3" containerName="console" Mar 18 09:10:54.951333 master-0 kubenswrapper[26053]: I0318 09:10:54.951309 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e381d6-17ca-4df3-a45f-22b95a1dc12f" containerName="console" Mar 18 09:10:54.952408 master-0 kubenswrapper[26053]: I0318 
09:10:54.952350 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:54.961752 master-0 kubenswrapper[26053]: I0318 09:10:54.961686 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Mar 18 09:10:54.962291 master-0 kubenswrapper[26053]: I0318 09:10:54.961909 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Mar 18 09:10:54.966017 master-0 kubenswrapper[26053]: I0318 09:10:54.965975 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Mar 18 09:10:54.966139 master-0 kubenswrapper[26053]: I0318 09:10:54.966009 26053 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Mar 18 09:10:54.967831 master-0 kubenswrapper[26053]: I0318 09:10:54.967741 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-x2ftq"] Mar 18 09:10:55.059795 master-0 kubenswrapper[26053]: I0318 09:10:55.059682 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phm2c\" (UniqueName: \"kubernetes.io/projected/955d8125-124d-461e-9742-93d11cbb85ff-kube-api-access-phm2c\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.059795 master-0 kubenswrapper[26053]: I0318 09:10:55.059795 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/955d8125-124d-461e-9742-93d11cbb85ff-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.060198 master-0 
kubenswrapper[26053]: I0318 09:10:55.060055 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/955d8125-124d-461e-9742-93d11cbb85ff-os-client-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.161399 master-0 kubenswrapper[26053]: I0318 09:10:55.161320 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/955d8125-124d-461e-9742-93d11cbb85ff-os-client-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.161690 master-0 kubenswrapper[26053]: I0318 09:10:55.161664 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phm2c\" (UniqueName: \"kubernetes.io/projected/955d8125-124d-461e-9742-93d11cbb85ff-kube-api-access-phm2c\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.161826 master-0 kubenswrapper[26053]: I0318 09:10:55.161787 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/955d8125-124d-461e-9742-93d11cbb85ff-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.163098 master-0 kubenswrapper[26053]: I0318 09:10:55.163042 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/955d8125-124d-461e-9742-93d11cbb85ff-sushy-emulator-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: 
\"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.175676 master-0 kubenswrapper[26053]: I0318 09:10:55.174405 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/955d8125-124d-461e-9742-93d11cbb85ff-os-client-config\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.188369 master-0 kubenswrapper[26053]: I0318 09:10:55.188262 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phm2c\" (UniqueName: \"kubernetes.io/projected/955d8125-124d-461e-9742-93d11cbb85ff-kube-api-access-phm2c\") pod \"sushy-emulator-59477995f9-x2ftq\" (UID: \"955d8125-124d-461e-9742-93d11cbb85ff\") " pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.280115 master-0 kubenswrapper[26053]: I0318 09:10:55.280038 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:10:55.815922 master-0 kubenswrapper[26053]: I0318 09:10:55.815858 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-59477995f9-x2ftq"] Mar 18 09:10:55.819942 master-0 kubenswrapper[26053]: W0318 09:10:55.819862 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod955d8125_124d_461e_9742_93d11cbb85ff.slice/crio-a32dad866509c29f30f318e0001f030ab29f9a8b51ac59ed0434960637acd2e6 WatchSource:0}: Error finding container a32dad866509c29f30f318e0001f030ab29f9a8b51ac59ed0434960637acd2e6: Status 404 returned error can't find the container with id a32dad866509c29f30f318e0001f030ab29f9a8b51ac59ed0434960637acd2e6 Mar 18 09:10:56.389056 master-0 kubenswrapper[26053]: I0318 09:10:56.388981 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" event={"ID":"955d8125-124d-461e-9742-93d11cbb85ff","Type":"ContainerStarted","Data":"a32dad866509c29f30f318e0001f030ab29f9a8b51ac59ed0434960637acd2e6"} Mar 18 09:11:05.480938 master-0 kubenswrapper[26053]: I0318 09:11:05.480832 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" event={"ID":"955d8125-124d-461e-9742-93d11cbb85ff","Type":"ContainerStarted","Data":"f15673ca25f44d4e7c767eb7a2f9bd230136d9ae8fbcfedca46691145050072a"} Mar 18 09:11:05.507975 master-0 kubenswrapper[26053]: I0318 09:11:05.507757 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" podStartSLOduration=2.083971342 podStartE2EDuration="11.507731779s" podCreationTimestamp="2026-03-18 09:10:54 +0000 UTC" firstStartedPulling="2026-03-18 09:10:55.82466101 +0000 UTC m=+443.318012421" lastFinishedPulling="2026-03-18 09:11:05.248421467 +0000 UTC m=+452.741772858" 
observedRunningTime="2026-03-18 09:11:05.502644302 +0000 UTC m=+452.995995703" watchObservedRunningTime="2026-03-18 09:11:05.507731779 +0000 UTC m=+453.001083190" Mar 18 09:11:15.282822 master-0 kubenswrapper[26053]: I0318 09:11:15.282752 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:11:15.283797 master-0 kubenswrapper[26053]: I0318 09:11:15.283730 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:11:15.295590 master-0 kubenswrapper[26053]: I0318 09:11:15.295169 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:11:15.663882 master-0 kubenswrapper[26053]: I0318 09:11:15.663721 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-59477995f9-x2ftq" Mar 18 09:11:17.305079 master-0 kubenswrapper[26053]: I0318 09:11:17.305011 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-865c46fcb5-r7nsh"] Mar 18 09:11:17.306240 master-0 kubenswrapper[26053]: I0318 09:11:17.306186 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.327038 master-0 kubenswrapper[26053]: I0318 09:11:17.326974 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865c46fcb5-r7nsh"] Mar 18 09:11:17.336448 master-0 kubenswrapper[26053]: I0318 09:11:17.336404 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-console-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336674 master-0 kubenswrapper[26053]: I0318 09:11:17.336460 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-service-ca\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336674 master-0 kubenswrapper[26053]: I0318 09:11:17.336620 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336751 master-0 kubenswrapper[26053]: I0318 09:11:17.336735 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-oauth-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336784 master-0 
kubenswrapper[26053]: I0318 09:11:17.336769 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-oauth-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336864 master-0 kubenswrapper[26053]: I0318 09:11:17.336810 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-trusted-ca-bundle\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.336864 master-0 kubenswrapper[26053]: I0318 09:11:17.336858 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcrxj\" (UniqueName: \"kubernetes.io/projected/9affd559-9165-4444-90bd-a29ffce19091-kube-api-access-pcrxj\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.438675 master-0 kubenswrapper[26053]: I0318 09:11:17.438582 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-console-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.438915 master-0 kubenswrapper[26053]: I0318 09:11:17.438704 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-service-ca\") pod \"console-865c46fcb5-r7nsh\" (UID: 
\"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.438915 master-0 kubenswrapper[26053]: I0318 09:11:17.438766 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.438915 master-0 kubenswrapper[26053]: I0318 09:11:17.438864 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-oauth-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.438915 master-0 kubenswrapper[26053]: I0318 09:11:17.438891 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-oauth-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.439213 master-0 kubenswrapper[26053]: I0318 09:11:17.438936 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-trusted-ca-bundle\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.439213 master-0 kubenswrapper[26053]: I0318 09:11:17.439151 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcrxj\" (UniqueName: 
\"kubernetes.io/projected/9affd559-9165-4444-90bd-a29ffce19091-kube-api-access-pcrxj\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.439472 master-0 kubenswrapper[26053]: I0318 09:11:17.439423 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-console-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.440605 master-0 kubenswrapper[26053]: I0318 09:11:17.440051 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-trusted-ca-bundle\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.441303 master-0 kubenswrapper[26053]: I0318 09:11:17.440723 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-oauth-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.441303 master-0 kubenswrapper[26053]: I0318 09:11:17.440756 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9affd559-9165-4444-90bd-a29ffce19091-service-ca\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.443093 master-0 kubenswrapper[26053]: I0318 09:11:17.443038 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-serving-cert\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.445875 master-0 kubenswrapper[26053]: I0318 09:11:17.445800 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9affd559-9165-4444-90bd-a29ffce19091-console-oauth-config\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.457115 master-0 kubenswrapper[26053]: I0318 09:11:17.457033 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcrxj\" (UniqueName: \"kubernetes.io/projected/9affd559-9165-4444-90bd-a29ffce19091-kube-api-access-pcrxj\") pod \"console-865c46fcb5-r7nsh\" (UID: \"9affd559-9165-4444-90bd-a29ffce19091\") " pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:17.638553 master-0 kubenswrapper[26053]: I0318 09:11:17.625101 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:18.117993 master-0 kubenswrapper[26053]: I0318 09:11:18.117438 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-865c46fcb5-r7nsh"] Mar 18 09:11:18.690797 master-0 kubenswrapper[26053]: I0318 09:11:18.690711 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865c46fcb5-r7nsh" event={"ID":"9affd559-9165-4444-90bd-a29ffce19091","Type":"ContainerStarted","Data":"a81eaa41983c85febcb38f1959bbe654088ae6c69151d213099db9481cc35e06"} Mar 18 09:11:18.690797 master-0 kubenswrapper[26053]: I0318 09:11:18.690795 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-865c46fcb5-r7nsh" event={"ID":"9affd559-9165-4444-90bd-a29ffce19091","Type":"ContainerStarted","Data":"fd34521142f539b4f3017ee30c7c60773372bf58aacc7053e61a2556b1e81988"} Mar 18 09:11:18.718026 master-0 kubenswrapper[26053]: I0318 09:11:18.717890 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-865c46fcb5-r7nsh" podStartSLOduration=1.717858803 podStartE2EDuration="1.717858803s" podCreationTimestamp="2026-03-18 09:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:11:18.713380042 +0000 UTC m=+466.206731503" watchObservedRunningTime="2026-03-18 09:11:18.717858803 +0000 UTC m=+466.211210214" Mar 18 09:11:27.625649 master-0 kubenswrapper[26053]: I0318 09:11:27.625423 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:27.626643 master-0 kubenswrapper[26053]: I0318 09:11:27.625900 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:27.635857 master-0 kubenswrapper[26053]: I0318 09:11:27.635775 26053 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:27.772109 master-0 kubenswrapper[26053]: I0318 09:11:27.772019 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-865c46fcb5-r7nsh" Mar 18 09:11:27.923977 master-0 kubenswrapper[26053]: I0318 09:11:27.923777 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:11:31.217647 master-0 kubenswrapper[26053]: I0318 09:11:31.217543 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 18 09:11:31.221113 master-0 kubenswrapper[26053]: I0318 09:11:31.221035 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.226320 master-0 kubenswrapper[26053]: I0318 09:11:31.226248 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 09:11:31.226756 master-0 kubenswrapper[26053]: I0318 09:11:31.226695 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6rhb" Mar 18 09:11:31.232047 master-0 kubenswrapper[26053]: I0318 09:11:31.231938 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 18 09:11:31.279145 master-0 kubenswrapper[26053]: I0318 09:11:31.279049 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.279466 master-0 kubenswrapper[26053]: I0318 
09:11:31.279280 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.279466 master-0 kubenswrapper[26053]: I0318 09:11:31.279327 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.380105 master-0 kubenswrapper[26053]: I0318 09:11:31.380023 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.380399 master-0 kubenswrapper[26053]: I0318 09:11:31.380117 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.380399 master-0 kubenswrapper[26053]: I0318 09:11:31.380227 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " 
pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.380399 master-0 kubenswrapper[26053]: I0318 09:11:31.380364 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.380584 master-0 kubenswrapper[26053]: I0318 09:11:31.380409 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.408787 master-0 kubenswrapper[26053]: I0318 09:11:31.408717 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access\") pod \"installer-6-master-0\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:31.552297 master-0 kubenswrapper[26053]: I0318 09:11:31.552125 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:11:32.048975 master-0 kubenswrapper[26053]: I0318 09:11:32.048934 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 18 09:11:32.054132 master-0 kubenswrapper[26053]: W0318 09:11:32.054043 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode37bd95a_3bb3_44cc_9008_ac4a2fd9d7d4.slice/crio-957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70 WatchSource:0}: Error finding container 957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70: Status 404 returned error can't find the container with id 957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70 Mar 18 09:11:32.810666 master-0 kubenswrapper[26053]: I0318 09:11:32.810520 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4","Type":"ContainerStarted","Data":"4a62be66ab2422ccf482d097057f6cecaa4e6f9d7c65a0277943c8d0c73eaebd"} Mar 18 09:11:32.810666 master-0 kubenswrapper[26053]: I0318 09:11:32.810671 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4","Type":"ContainerStarted","Data":"957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70"} Mar 18 09:11:32.836804 master-0 kubenswrapper[26053]: I0318 09:11:32.836685 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-0" podStartSLOduration=1.836662413 podStartE2EDuration="1.836662413s" podCreationTimestamp="2026-03-18 09:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:11:32.83135057 +0000 UTC 
m=+480.324701991" watchObservedRunningTime="2026-03-18 09:11:32.836662413 +0000 UTC m=+480.330013814" Mar 18 09:11:36.638947 master-0 kubenswrapper[26053]: I0318 09:11:36.638828 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-58b4c9589-b98wt"] Mar 18 09:11:36.644629 master-0 kubenswrapper[26053]: I0318 09:11:36.644543 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.686764 master-0 kubenswrapper[26053]: I0318 09:11:36.686702 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-58b4c9589-b98wt"] Mar 18 09:11:36.770729 master-0 kubenswrapper[26053]: I0318 09:11:36.770667 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-os-client-config\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.771382 master-0 kubenswrapper[26053]: I0318 09:11:36.771341 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ltz\" (UniqueName: \"kubernetes.io/projected/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-kube-api-access-94ltz\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.872807 master-0 kubenswrapper[26053]: I0318 09:11:36.872761 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94ltz\" (UniqueName: \"kubernetes.io/projected/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-kube-api-access-94ltz\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " 
pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.873146 master-0 kubenswrapper[26053]: I0318 09:11:36.872876 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-os-client-config\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.878503 master-0 kubenswrapper[26053]: I0318 09:11:36.878432 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-os-client-config\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.891975 master-0 kubenswrapper[26053]: I0318 09:11:36.891885 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94ltz\" (UniqueName: \"kubernetes.io/projected/77c1e3a8-37e4-4c06-b3f8-16aa75ae2665-kube-api-access-94ltz\") pod \"nova-console-poller-58b4c9589-b98wt\" (UID: \"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665\") " pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:36.979058 master-0 kubenswrapper[26053]: I0318 09:11:36.978953 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" Mar 18 09:11:37.487813 master-0 kubenswrapper[26053]: W0318 09:11:37.487562 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77c1e3a8_37e4_4c06_b3f8_16aa75ae2665.slice/crio-aece1f940d49f636eb7e32372689bff2f928d2cecf891c6917d94a8e9cf4534a WatchSource:0}: Error finding container aece1f940d49f636eb7e32372689bff2f928d2cecf891c6917d94a8e9cf4534a: Status 404 returned error can't find the container with id aece1f940d49f636eb7e32372689bff2f928d2cecf891c6917d94a8e9cf4534a Mar 18 09:11:37.488211 master-0 kubenswrapper[26053]: I0318 09:11:37.488133 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-58b4c9589-b98wt"] Mar 18 09:11:37.855400 master-0 kubenswrapper[26053]: I0318 09:11:37.855267 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" event={"ID":"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665","Type":"ContainerStarted","Data":"aece1f940d49f636eb7e32372689bff2f928d2cecf891c6917d94a8e9cf4534a"} Mar 18 09:11:43.939124 master-0 kubenswrapper[26053]: I0318 09:11:43.938942 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" event={"ID":"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665","Type":"ContainerStarted","Data":"5fc7359d256325115f3c148243a7b50858ff6c1fa2b2b05b515b6b4accaace8a"} Mar 18 09:11:44.952193 master-0 kubenswrapper[26053]: I0318 09:11:44.952108 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" event={"ID":"77c1e3a8-37e4-4c06-b3f8-16aa75ae2665","Type":"ContainerStarted","Data":"a532b2e5c862446f3ef209198013a2aeb5a2aedeb17ff1f5b02c10dbbc286ace"} Mar 18 09:11:44.979070 master-0 kubenswrapper[26053]: I0318 09:11:44.978959 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="sushy-emulator/nova-console-poller-58b4c9589-b98wt" podStartSLOduration=2.017478544 podStartE2EDuration="8.978938573s" podCreationTimestamp="2026-03-18 09:11:36 +0000 UTC" firstStartedPulling="2026-03-18 09:11:37.491696108 +0000 UTC m=+484.985047489" lastFinishedPulling="2026-03-18 09:11:44.453156107 +0000 UTC m=+491.946507518" observedRunningTime="2026-03-18 09:11:44.976307057 +0000 UTC m=+492.469658468" watchObservedRunningTime="2026-03-18 09:11:44.978938573 +0000 UTC m=+492.472289954" Mar 18 09:11:52.962118 master-0 kubenswrapper[26053]: I0318 09:11:52.962046 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5b85d959c9-8jjlz" podUID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" containerName="console" containerID="cri-o://b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f" gracePeriod=15 Mar 18 09:11:53.486037 master-0 kubenswrapper[26053]: I0318 09:11:53.485981 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b85d959c9-8jjlz_ba86cadf-6b4a-4e54-a0ee-c410b4965f7e/console/0.log" Mar 18 09:11:53.486325 master-0 kubenswrapper[26053]: I0318 09:11:53.486067 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:11:53.570934 master-0 kubenswrapper[26053]: I0318 09:11:53.570819 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.570934 master-0 kubenswrapper[26053]: I0318 09:11:53.570876 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.570934 master-0 kubenswrapper[26053]: I0318 09:11:53.570917 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.571266 master-0 kubenswrapper[26053]: I0318 09:11:53.570993 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j86wt\" (UniqueName: \"kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.571266 master-0 kubenswrapper[26053]: I0318 09:11:53.571025 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.571266 master-0 kubenswrapper[26053]: I0318 
09:11:53.571093 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.571266 master-0 kubenswrapper[26053]: I0318 09:11:53.571126 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert\") pod \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\" (UID: \"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e\") " Mar 18 09:11:53.572410 master-0 kubenswrapper[26053]: I0318 09:11:53.572351 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config" (OuterVolumeSpecName: "console-config") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:53.572491 master-0 kubenswrapper[26053]: I0318 09:11:53.572432 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca" (OuterVolumeSpecName: "service-ca") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:53.572775 master-0 kubenswrapper[26053]: I0318 09:11:53.572556 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:53.572846 master-0 kubenswrapper[26053]: I0318 09:11:53.572782 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 18 09:11:53.575524 master-0 kubenswrapper[26053]: I0318 09:11:53.575458 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:11:53.577747 master-0 kubenswrapper[26053]: I0318 09:11:53.577708 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt" (OuterVolumeSpecName: "kube-api-access-j86wt") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "kube-api-access-j86wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:11:53.577830 master-0 kubenswrapper[26053]: I0318 09:11:53.577796 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" (UID: "ba86cadf-6b4a-4e54-a0ee-c410b4965f7e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673478 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j86wt\" (UniqueName: \"kubernetes.io/projected/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-kube-api-access-j86wt\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673520 26053 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673530 26053 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673541 26053 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673553 26053 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673581 26053 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:53.673613 master-0 kubenswrapper[26053]: I0318 09:11:53.673591 26053 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e-console-config\") on node \"master-0\" DevicePath \"\"" Mar 18 09:11:54.043585 master-0 kubenswrapper[26053]: I0318 09:11:54.043522 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5b85d959c9-8jjlz_ba86cadf-6b4a-4e54-a0ee-c410b4965f7e/console/0.log" Mar 18 09:11:54.044365 master-0 kubenswrapper[26053]: I0318 09:11:54.043621 26053 generic.go:334] "Generic (PLEG): container finished" podID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" containerID="b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f" exitCode=2 Mar 18 09:11:54.044365 master-0 kubenswrapper[26053]: I0318 09:11:54.043668 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b85d959c9-8jjlz" event={"ID":"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e","Type":"ContainerDied","Data":"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f"} Mar 18 09:11:54.044365 master-0 kubenswrapper[26053]: I0318 09:11:54.043700 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5b85d959c9-8jjlz" event={"ID":"ba86cadf-6b4a-4e54-a0ee-c410b4965f7e","Type":"ContainerDied","Data":"e759d50c748b8462befb6b7cb167fdbffe5ef7b304563963661333ab3a016e51"} Mar 18 09:11:54.044365 master-0 kubenswrapper[26053]: I0318 09:11:54.043719 26053 scope.go:117] "RemoveContainer" containerID="b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f" Mar 18 09:11:54.044365 master-0 kubenswrapper[26053]: I0318 09:11:54.043869 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5b85d959c9-8jjlz" Mar 18 09:11:54.074395 master-0 kubenswrapper[26053]: I0318 09:11:54.074352 26053 scope.go:117] "RemoveContainer" containerID="b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f" Mar 18 09:11:54.075166 master-0 kubenswrapper[26053]: E0318 09:11:54.075030 26053 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f\": container with ID starting with b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f not found: ID does not exist" containerID="b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f" Mar 18 09:11:54.075232 master-0 kubenswrapper[26053]: I0318 09:11:54.075175 26053 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f"} err="failed to get container status \"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f\": rpc error: code = NotFound desc = could not find container \"b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f\": container with ID starting with b3e9685326276c949bb74ec130c654594ecc555b662f5c7c7729208615892e4f not found: ID does not exist" Mar 18 09:11:54.099814 master-0 kubenswrapper[26053]: I0318 09:11:54.099754 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:11:54.105581 master-0 kubenswrapper[26053]: I0318 09:11:54.105533 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5b85d959c9-8jjlz"] Mar 18 09:11:54.742116 master-0 kubenswrapper[26053]: I0318 09:11:54.741379 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" path="/var/lib/kubelet/pods/ba86cadf-6b4a-4e54-a0ee-c410b4965f7e/volumes" Mar 18 
09:12:05.247272 master-0 kubenswrapper[26053]: I0318 09:12:05.247148 26053 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:12:05.248351 master-0 kubenswrapper[26053]: I0318 09:12:05.247722 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="cluster-policy-controller" containerID="cri-o://95b80d622ddf2ed768357e028eaa3eb8c0cdb8ebe103e34d7e2c03682a426f65" gracePeriod=30 Mar 18 09:12:05.248351 master-0 kubenswrapper[26053]: I0318 09:12:05.247793 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://2eb70f844dc859b9c27f10b4a002866192e0ad65ec1b06f30aaa34b77fb0b7f9" gracePeriod=30 Mar 18 09:12:05.248351 master-0 kubenswrapper[26053]: I0318 09:12:05.247793 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" containerID="cri-o://e2312ca4bc36c3067315c67ea5484812afd6cc65bceaf66493a13a06e24d3095" gracePeriod=30 Mar 18 09:12:05.248351 master-0 kubenswrapper[26053]: I0318 09:12:05.247790 26053 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://d679a1297cafaf3badf630142366c09d03cda0e9cd66b05fa66aef0604da0f46" gracePeriod=30 Mar 18 09:12:05.249439 master-0 kubenswrapper[26053]: I0318 09:12:05.249175 26053 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:12:05.249521 master-0 kubenswrapper[26053]: E0318 09:12:05.249496 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.249521 master-0 kubenswrapper[26053]: I0318 09:12:05.249508 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.249521 master-0 kubenswrapper[26053]: E0318 09:12:05.249517 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-recovery-controller" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: I0318 09:12:05.249524 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-recovery-controller" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: E0318 09:12:05.249535 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="cluster-policy-controller" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: I0318 09:12:05.249541 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="cluster-policy-controller" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: E0318 09:12:05.249562 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-cert-syncer" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: I0318 09:12:05.249583 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-cert-syncer" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: E0318 09:12:05.249597 26053 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" containerName="console" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: I0318 09:12:05.249603 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" containerName="console" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: E0318 09:12:05.249618 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.249738 master-0 kubenswrapper[26053]: I0318 09:12:05.249624 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.249754 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.249767 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-cert-syncer" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.249792 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba86cadf-6b4a-4e54-a0ee-c410b4965f7e" containerName="console" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.249803 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="cluster-policy-controller" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.249817 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager-recovery-controller" Mar 18 09:12:05.250406 master-0 kubenswrapper[26053]: I0318 09:12:05.250065 26053 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="60c2ba061fb7c3edad3900526541ee3c" containerName="kube-controller-manager" Mar 18 09:12:05.405728 master-0 kubenswrapper[26053]: I0318 09:12:05.405660 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.406044 master-0 kubenswrapper[26053]: I0318 09:12:05.405818 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.446617 master-0 kubenswrapper[26053]: I0318 09:12:05.446525 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager-cert-syncer/0.log" Mar 18 09:12:05.450330 master-0 kubenswrapper[26053]: I0318 09:12:05.450285 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager/0.log" Mar 18 09:12:05.450472 master-0 kubenswrapper[26053]: I0318 09:12:05.450385 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.454432 master-0 kubenswrapper[26053]: I0318 09:12:05.454363 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="60c2ba061fb7c3edad3900526541ee3c" podUID="e665517aa7aaa407efaa6a71427f5785" Mar 18 09:12:05.508734 master-0 kubenswrapper[26053]: I0318 09:12:05.508537 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.508734 master-0 kubenswrapper[26053]: I0318 09:12:05.508682 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.508983 master-0 kubenswrapper[26053]: I0318 09:12:05.508750 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.508983 master-0 kubenswrapper[26053]: I0318 09:12:05.508869 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e665517aa7aaa407efaa6a71427f5785-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: 
\"e665517aa7aaa407efaa6a71427f5785\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:05.609579 master-0 kubenswrapper[26053]: I0318 09:12:05.609506 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir\") pod \"60c2ba061fb7c3edad3900526541ee3c\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " Mar 18 09:12:05.609816 master-0 kubenswrapper[26053]: I0318 09:12:05.609630 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir\") pod \"60c2ba061fb7c3edad3900526541ee3c\" (UID: \"60c2ba061fb7c3edad3900526541ee3c\") " Mar 18 09:12:05.609816 master-0 kubenswrapper[26053]: I0318 09:12:05.609756 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "60c2ba061fb7c3edad3900526541ee3c" (UID: "60c2ba061fb7c3edad3900526541ee3c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:12:05.609987 master-0 kubenswrapper[26053]: I0318 09:12:05.609946 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "60c2ba061fb7c3edad3900526541ee3c" (UID: "60c2ba061fb7c3edad3900526541ee3c"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:12:05.610274 master-0 kubenswrapper[26053]: I0318 09:12:05.610235 26053 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:05.610326 master-0 kubenswrapper[26053]: I0318 09:12:05.610276 26053 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/60c2ba061fb7c3edad3900526541ee3c-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:06.156428 master-0 kubenswrapper[26053]: I0318 09:12:06.156316 26053 generic.go:334] "Generic (PLEG): container finished" podID="e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" containerID="4a62be66ab2422ccf482d097057f6cecaa4e6f9d7c65a0277943c8d0c73eaebd" exitCode=0 Mar 18 09:12:06.156428 master-0 kubenswrapper[26053]: I0318 09:12:06.156388 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4","Type":"ContainerDied","Data":"4a62be66ab2422ccf482d097057f6cecaa4e6f9d7c65a0277943c8d0c73eaebd"} Mar 18 09:12:06.162930 master-0 kubenswrapper[26053]: I0318 09:12:06.162858 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager-cert-syncer/0.log" Mar 18 09:12:06.164795 master-0 kubenswrapper[26053]: I0318 09:12:06.164745 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager/0.log" Mar 18 09:12:06.164931 master-0 kubenswrapper[26053]: I0318 09:12:06.164832 26053 generic.go:334] "Generic (PLEG): container finished" podID="60c2ba061fb7c3edad3900526541ee3c" 
containerID="e2312ca4bc36c3067315c67ea5484812afd6cc65bceaf66493a13a06e24d3095" exitCode=0 Mar 18 09:12:06.164931 master-0 kubenswrapper[26053]: I0318 09:12:06.164871 26053 generic.go:334] "Generic (PLEG): container finished" podID="60c2ba061fb7c3edad3900526541ee3c" containerID="d679a1297cafaf3badf630142366c09d03cda0e9cd66b05fa66aef0604da0f46" exitCode=0 Mar 18 09:12:06.164931 master-0 kubenswrapper[26053]: I0318 09:12:06.164893 26053 generic.go:334] "Generic (PLEG): container finished" podID="60c2ba061fb7c3edad3900526541ee3c" containerID="2eb70f844dc859b9c27f10b4a002866192e0ad65ec1b06f30aaa34b77fb0b7f9" exitCode=2 Mar 18 09:12:06.164931 master-0 kubenswrapper[26053]: I0318 09:12:06.164911 26053 generic.go:334] "Generic (PLEG): container finished" podID="60c2ba061fb7c3edad3900526541ee3c" containerID="95b80d622ddf2ed768357e028eaa3eb8c0cdb8ebe103e34d7e2c03682a426f65" exitCode=0 Mar 18 09:12:06.165226 master-0 kubenswrapper[26053]: I0318 09:12:06.164945 26053 scope.go:117] "RemoveContainer" containerID="ed8bdc24b42ed8397f238b0c55ea4555545fbf502b6a47a78f76d63cdd9cc08f" Mar 18 09:12:06.165226 master-0 kubenswrapper[26053]: I0318 09:12:06.165001 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26be428103a2972549ffaa2401b0e508a5356808a3733a677148921db330d91e" Mar 18 09:12:06.169934 master-0 kubenswrapper[26053]: I0318 09:12:06.169868 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:06.199629 master-0 kubenswrapper[26053]: I0318 09:12:06.194723 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="60c2ba061fb7c3edad3900526541ee3c" podUID="e665517aa7aaa407efaa6a71427f5785" Mar 18 09:12:06.221816 master-0 kubenswrapper[26053]: I0318 09:12:06.221745 26053 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="60c2ba061fb7c3edad3900526541ee3c" podUID="e665517aa7aaa407efaa6a71427f5785" Mar 18 09:12:06.739537 master-0 kubenswrapper[26053]: I0318 09:12:06.739472 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c2ba061fb7c3edad3900526541ee3c" path="/var/lib/kubelet/pods/60c2ba061fb7c3edad3900526541ee3c/volumes" Mar 18 09:12:07.180814 master-0 kubenswrapper[26053]: I0318 09:12:07.180608 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_60c2ba061fb7c3edad3900526541ee3c/kube-controller-manager-cert-syncer/0.log" Mar 18 09:12:07.621169 master-0 kubenswrapper[26053]: I0318 09:12:07.620961 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:12:07.750188 master-0 kubenswrapper[26053]: I0318 09:12:07.750073 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access\") pod \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " Mar 18 09:12:07.750188 master-0 kubenswrapper[26053]: I0318 09:12:07.750167 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock\") pod \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " Mar 18 09:12:07.751167 master-0 kubenswrapper[26053]: I0318 09:12:07.750252 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir\") pod \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\" (UID: \"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4\") " Mar 18 09:12:07.751167 master-0 kubenswrapper[26053]: I0318 09:12:07.750401 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock" (OuterVolumeSpecName: "var-lock") pod "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" (UID: "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:12:07.751167 master-0 kubenswrapper[26053]: I0318 09:12:07.750548 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" (UID: "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:12:07.751167 master-0 kubenswrapper[26053]: I0318 09:12:07.751057 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:07.751167 master-0 kubenswrapper[26053]: I0318 09:12:07.751099 26053 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:07.754993 master-0 kubenswrapper[26053]: I0318 09:12:07.754911 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" (UID: "e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:12:07.852622 master-0 kubenswrapper[26053]: I0318 09:12:07.852365 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:08.194545 master-0 kubenswrapper[26053]: I0318 09:12:08.194365 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4","Type":"ContainerDied","Data":"957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70"} Mar 18 09:12:08.194545 master-0 kubenswrapper[26053]: I0318 09:12:08.194445 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="957bab5edbbfe034aca7996c67ab6e750ffc8abe4ac9138fcb30f0b1dd28fc70" Mar 18 09:12:08.195150 master-0 kubenswrapper[26053]: I0318 09:12:08.194561 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 18 09:12:17.729290 master-0 kubenswrapper[26053]: I0318 09:12:17.729185 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:17.756387 master-0 kubenswrapper[26053]: I0318 09:12:17.756282 26053 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3347a328-a439-478d-91af-9d77e0e8ba99" Mar 18 09:12:17.756387 master-0 kubenswrapper[26053]: I0318 09:12:17.756357 26053 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="3347a328-a439-478d-91af-9d77e0e8ba99" Mar 18 09:12:17.782740 master-0 kubenswrapper[26053]: I0318 09:12:17.782643 26053 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:17.793524 master-0 kubenswrapper[26053]: I0318 09:12:17.793435 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:12:17.806534 master-0 kubenswrapper[26053]: I0318 09:12:17.806465 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:17.825670 master-0 kubenswrapper[26053]: I0318 09:12:17.825588 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:12:17.832686 master-0 kubenswrapper[26053]: I0318 09:12:17.832598 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 18 09:12:18.772465 master-0 kubenswrapper[26053]: I0318 09:12:18.772377 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e665517aa7aaa407efaa6a71427f5785","Type":"ContainerStarted","Data":"a884293d0a9a9d97ec5e1d8901c1ac63ba6d049a035e7d664b22d9ab19e0af9a"} Mar 18 09:12:18.773190 master-0 kubenswrapper[26053]: I0318 09:12:18.772440 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e665517aa7aaa407efaa6a71427f5785","Type":"ContainerStarted","Data":"688141b7b755232d0fdc5fc01036b07cf7a7e8a846ddb250bf9a42ce10b6f54e"} Mar 18 09:12:18.773190 master-0 kubenswrapper[26053]: I0318 09:12:18.772829 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e665517aa7aaa407efaa6a71427f5785","Type":"ContainerStarted","Data":"abdadf6a879ef37265e4dbd0f91c03b5b36500e9b5224d6d202a7e6bd5256d09"} Mar 18 09:12:18.773190 master-0 kubenswrapper[26053]: I0318 09:12:18.772845 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e665517aa7aaa407efaa6a71427f5785","Type":"ContainerStarted","Data":"1a9fc5d8c082126388094528d20c789047739df1a71b346f7ca4ce7b140d6578"} Mar 18 09:12:19.783637 master-0 kubenswrapper[26053]: I0318 09:12:19.783228 
26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e665517aa7aaa407efaa6a71427f5785","Type":"ContainerStarted","Data":"608746ac62be92b4d35fc5bfd47c29b3c5b00ad3b28026e3ae88fe94a14fb59d"} Mar 18 09:12:19.809591 master-0 kubenswrapper[26053]: I0318 09:12:19.809471 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.809449671 podStartE2EDuration="2.809449671s" podCreationTimestamp="2026-03-18 09:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:12:19.804355753 +0000 UTC m=+527.297707134" watchObservedRunningTime="2026-03-18 09:12:19.809449671 +0000 UTC m=+527.302801062" Mar 18 09:12:27.808976 master-0 kubenswrapper[26053]: I0318 09:12:27.808887 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:27.808976 master-0 kubenswrapper[26053]: I0318 09:12:27.808982 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:27.810183 master-0 kubenswrapper[26053]: I0318 09:12:27.809015 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:27.810183 master-0 kubenswrapper[26053]: I0318 09:12:27.809040 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:27.816129 master-0 kubenswrapper[26053]: I0318 09:12:27.816069 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:27.817556 master-0 kubenswrapper[26053]: I0318 09:12:27.817457 26053 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:28.887301 master-0 kubenswrapper[26053]: I0318 09:12:28.887196 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:28.888815 master-0 kubenswrapper[26053]: I0318 09:12:28.888735 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 18 09:12:31.481322 master-0 kubenswrapper[26053]: I0318 09:12:31.481251 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 18 09:12:31.483299 master-0 kubenswrapper[26053]: E0318 09:12:31.483249 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" containerName="installer" Mar 18 09:12:31.483518 master-0 kubenswrapper[26053]: I0318 09:12:31.483489 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" containerName="installer" Mar 18 09:12:31.484108 master-0 kubenswrapper[26053]: I0318 09:12:31.484070 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4" containerName="installer" Mar 18 09:12:31.485406 master-0 kubenswrapper[26053]: I0318 09:12:31.485367 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.488353 master-0 kubenswrapper[26053]: I0318 09:12:31.487963 26053 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-v6rhb" Mar 18 09:12:31.488748 master-0 kubenswrapper[26053]: I0318 09:12:31.488717 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 18 09:12:31.490690 master-0 kubenswrapper[26053]: I0318 09:12:31.490612 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 18 09:12:31.592846 master-0 kubenswrapper[26053]: I0318 09:12:31.592751 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.593053 master-0 kubenswrapper[26053]: I0318 09:12:31.592879 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.694809 master-0 kubenswrapper[26053]: I0318 09:12:31.694747 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " 
pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.695034 master-0 kubenswrapper[26053]: I0318 09:12:31.694818 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.695298 master-0 kubenswrapper[26053]: I0318 09:12:31.695237 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.714468 master-0 kubenswrapper[26053]: I0318 09:12:31.714369 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:31.817397 master-0 kubenswrapper[26053]: I0318 09:12:31.817239 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:32.303213 master-0 kubenswrapper[26053]: I0318 09:12:32.303158 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-6-master-0"] Mar 18 09:12:32.927720 master-0 kubenswrapper[26053]: I0318 09:12:32.927512 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"b427f955-9128-4b6d-a2f1-43297755dc0b","Type":"ContainerStarted","Data":"f11a838cc8b7d564a007b5f4cc4e5939639bd1d38a158b465c005fafb9b2d006"} Mar 18 09:12:32.927720 master-0 kubenswrapper[26053]: I0318 09:12:32.927583 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"b427f955-9128-4b6d-a2f1-43297755dc0b","Type":"ContainerStarted","Data":"9a3775955795ab88f172e235b2788e6c4f7946e9cfa8b65be352e33fd8238eaf"} Mar 18 09:12:32.956966 master-0 kubenswrapper[26053]: I0318 09:12:32.956820 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" podStartSLOduration=1.956793437 podStartE2EDuration="1.956793437s" podCreationTimestamp="2026-03-18 09:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:12:32.951925276 +0000 UTC m=+540.445276687" watchObservedRunningTime="2026-03-18 09:12:32.956793437 +0000 UTC m=+540.450144858" Mar 18 09:12:33.941137 master-0 kubenswrapper[26053]: I0318 09:12:33.941072 26053 generic.go:334] "Generic (PLEG): container finished" podID="b427f955-9128-4b6d-a2f1-43297755dc0b" containerID="f11a838cc8b7d564a007b5f4cc4e5939639bd1d38a158b465c005fafb9b2d006" exitCode=0 Mar 18 09:12:33.942028 master-0 kubenswrapper[26053]: I0318 09:12:33.941171 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"b427f955-9128-4b6d-a2f1-43297755dc0b","Type":"ContainerDied","Data":"f11a838cc8b7d564a007b5f4cc4e5939639bd1d38a158b465c005fafb9b2d006"} Mar 18 09:12:35.325308 master-0 kubenswrapper[26053]: I0318 09:12:35.325258 26053 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:35.457359 master-0 kubenswrapper[26053]: I0318 09:12:35.457288 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access\") pod \"b427f955-9128-4b6d-a2f1-43297755dc0b\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " Mar 18 09:12:35.457359 master-0 kubenswrapper[26053]: I0318 09:12:35.457345 26053 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir\") pod \"b427f955-9128-4b6d-a2f1-43297755dc0b\" (UID: \"b427f955-9128-4b6d-a2f1-43297755dc0b\") " Mar 18 09:12:35.457904 master-0 kubenswrapper[26053]: I0318 09:12:35.457865 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b427f955-9128-4b6d-a2f1-43297755dc0b" (UID: "b427f955-9128-4b6d-a2f1-43297755dc0b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 18 09:12:35.472642 master-0 kubenswrapper[26053]: I0318 09:12:35.472260 26053 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b427f955-9128-4b6d-a2f1-43297755dc0b" (UID: "b427f955-9128-4b6d-a2f1-43297755dc0b"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 18 09:12:35.559511 master-0 kubenswrapper[26053]: I0318 09:12:35.559379 26053 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b427f955-9128-4b6d-a2f1-43297755dc0b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:35.559511 master-0 kubenswrapper[26053]: I0318 09:12:35.559422 26053 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b427f955-9128-4b6d-a2f1-43297755dc0b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 18 09:12:35.956301 master-0 kubenswrapper[26053]: I0318 09:12:35.956244 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" event={"ID":"b427f955-9128-4b6d-a2f1-43297755dc0b","Type":"ContainerDied","Data":"9a3775955795ab88f172e235b2788e6c4f7946e9cfa8b65be352e33fd8238eaf"} Mar 18 09:12:35.956301 master-0 kubenswrapper[26053]: I0318 09:12:35.956298 26053 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3775955795ab88f172e235b2788e6c4f7946e9cfa8b65be352e33fd8238eaf" Mar 18 09:12:35.956583 master-0 kubenswrapper[26053]: I0318 09:12:35.956363 26053 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-6-master-0" Mar 18 09:12:37.162844 master-0 kubenswrapper[26053]: I0318 09:12:37.162793 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-recorder-75f56d88f-cbf8h"] Mar 18 09:12:37.163816 master-0 kubenswrapper[26053]: E0318 09:12:37.163794 26053 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b427f955-9128-4b6d-a2f1-43297755dc0b" containerName="pruner" Mar 18 09:12:37.163917 master-0 kubenswrapper[26053]: I0318 09:12:37.163903 26053 state_mem.go:107] "Deleted CPUSet assignment" podUID="b427f955-9128-4b6d-a2f1-43297755dc0b" containerName="pruner" Mar 18 09:12:37.164193 master-0 kubenswrapper[26053]: I0318 09:12:37.164174 26053 memory_manager.go:354] "RemoveStaleState removing state" podUID="b427f955-9128-4b6d-a2f1-43297755dc0b" containerName="pruner" Mar 18 09:12:37.165232 master-0 kubenswrapper[26053]: I0318 09:12:37.165208 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.184765 master-0 kubenswrapper[26053]: I0318 09:12:37.184714 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-75f56d88f-cbf8h"] Mar 18 09:12:37.288071 master-0 kubenswrapper[26053]: I0318 09:12:37.288024 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/3c30b8c4-b79f-417d-9d8d-265040e7c12a-nova-console-recordings-pv\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.288292 master-0 kubenswrapper[26053]: I0318 09:12:37.288106 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx6gl\" (UniqueName: \"kubernetes.io/projected/3c30b8c4-b79f-417d-9d8d-265040e7c12a-kube-api-access-nx6gl\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.288292 master-0 kubenswrapper[26053]: I0318 09:12:37.288133 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3c30b8c4-b79f-417d-9d8d-265040e7c12a-os-client-config\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.389549 master-0 kubenswrapper[26053]: I0318 09:12:37.389482 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/3c30b8c4-b79f-417d-9d8d-265040e7c12a-nova-console-recordings-pv\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: 
\"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.389819 master-0 kubenswrapper[26053]: I0318 09:12:37.389677 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx6gl\" (UniqueName: \"kubernetes.io/projected/3c30b8c4-b79f-417d-9d8d-265040e7c12a-kube-api-access-nx6gl\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.389819 master-0 kubenswrapper[26053]: I0318 09:12:37.389705 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3c30b8c4-b79f-417d-9d8d-265040e7c12a-os-client-config\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.393230 master-0 kubenswrapper[26053]: I0318 09:12:37.393198 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/3c30b8c4-b79f-417d-9d8d-265040e7c12a-os-client-config\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:37.411972 master-0 kubenswrapper[26053]: I0318 09:12:37.411924 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx6gl\" (UniqueName: \"kubernetes.io/projected/3c30b8c4-b79f-417d-9d8d-265040e7c12a-kube-api-access-nx6gl\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:38.090092 master-0 kubenswrapper[26053]: I0318 09:12:38.089837 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-console-recordings-pv\" (UniqueName: \"kubernetes.io/nfs/3c30b8c4-b79f-417d-9d8d-265040e7c12a-nova-console-recordings-pv\") pod \"nova-console-recorder-75f56d88f-cbf8h\" (UID: \"3c30b8c4-b79f-417d-9d8d-265040e7c12a\") " pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:38.385399 master-0 kubenswrapper[26053]: I0318 09:12:38.385261 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" Mar 18 09:12:38.819204 master-0 kubenswrapper[26053]: I0318 09:12:38.819144 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-recorder-75f56d88f-cbf8h"] Mar 18 09:12:38.983174 master-0 kubenswrapper[26053]: I0318 09:12:38.983085 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" event={"ID":"3c30b8c4-b79f-417d-9d8d-265040e7c12a","Type":"ContainerStarted","Data":"687fbcbc35f672bdce049cc6a6b0499f7f68b7ed8899e08430b788d2402db0bb"} Mar 18 09:12:39.447456 master-0 kubenswrapper[26053]: I0318 09:12:39.447378 26053 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 09:12:39.456327 master-0 kubenswrapper[26053]: I0318 09:12:39.456260 26053 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 18 09:12:40.747966 master-0 kubenswrapper[26053]: I0318 09:12:40.747909 26053 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75d3625-4131-465d-a8e2-4c42588c7630" path="/var/lib/kubelet/pods/b75d3625-4131-465d-a8e2-4c42588c7630/volumes" Mar 18 09:12:45.972146 master-0 kubenswrapper[26053]: I0318 09:12:45.972069 26053 scope.go:117] "RemoveContainer" containerID="f10ab16270a7803054be2d271744f71e45d5e3fab77e472706ee3fb055b353ea" Mar 18 09:12:50.112316 master-0 kubenswrapper[26053]: I0318 09:12:50.112223 26053 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" event={"ID":"3c30b8c4-b79f-417d-9d8d-265040e7c12a","Type":"ContainerStarted","Data":"5890d525944f68211adb3d60dd45b4fb0d7edd1eae865f478f03f2f342567315"} Mar 18 09:12:51.119557 master-0 kubenswrapper[26053]: I0318 09:12:51.119489 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" event={"ID":"3c30b8c4-b79f-417d-9d8d-265040e7c12a","Type":"ContainerStarted","Data":"50aa90b0c1f1d2e42512570297da8db157359794dbcb2834be08d51309f48f0a"} Mar 18 09:12:51.152293 master-0 kubenswrapper[26053]: I0318 09:12:51.152148 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-recorder-75f56d88f-cbf8h" podStartSLOduration=3.040207279 podStartE2EDuration="14.152047096s" podCreationTimestamp="2026-03-18 09:12:37 +0000 UTC" firstStartedPulling="2026-03-18 09:12:38.819330518 +0000 UTC m=+546.312681899" lastFinishedPulling="2026-03-18 09:12:49.931170295 +0000 UTC m=+557.424521716" observedRunningTime="2026-03-18 09:12:51.140120717 +0000 UTC m=+558.633472148" watchObservedRunningTime="2026-03-18 09:12:51.152047096 +0000 UTC m=+558.645398517" Mar 18 09:13:02.429726 master-0 kubenswrapper[26053]: I0318 09:13:02.429499 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fvl8d/must-gather-42mv2"] Mar 18 09:13:02.431535 master-0 kubenswrapper[26053]: I0318 09:13:02.431477 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.434320 master-0 kubenswrapper[26053]: I0318 09:13:02.434269 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fvl8d"/"kube-root-ca.crt" Mar 18 09:13:02.435218 master-0 kubenswrapper[26053]: I0318 09:13:02.435153 26053 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-fvl8d"/"openshift-service-ca.crt" Mar 18 09:13:02.439446 master-0 kubenswrapper[26053]: I0318 09:13:02.439375 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fvl8d/must-gather-l28gx"] Mar 18 09:13:02.441497 master-0 kubenswrapper[26053]: I0318 09:13:02.441435 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.457035 master-0 kubenswrapper[26053]: I0318 09:13:02.456931 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/must-gather-42mv2"] Mar 18 09:13:02.489804 master-0 kubenswrapper[26053]: I0318 09:13:02.465927 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/must-gather-l28gx"] Mar 18 09:13:02.557209 master-0 kubenswrapper[26053]: I0318 09:13:02.557128 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-must-gather-output\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.557209 master-0 kubenswrapper[26053]: I0318 09:13:02.557211 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvrh\" (UniqueName: \"kubernetes.io/projected/038a46f0-0592-4e0e-ac3a-bb12cf869615-kube-api-access-5dvrh\") pod \"must-gather-42mv2\" (UID: 
\"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.557495 master-0 kubenswrapper[26053]: I0318 09:13:02.557258 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/038a46f0-0592-4e0e-ac3a-bb12cf869615-must-gather-output\") pod \"must-gather-42mv2\" (UID: \"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.557495 master-0 kubenswrapper[26053]: I0318 09:13:02.557311 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8wpl\" (UniqueName: \"kubernetes.io/projected/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-kube-api-access-s8wpl\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.659136 master-0 kubenswrapper[26053]: I0318 09:13:02.659061 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/038a46f0-0592-4e0e-ac3a-bb12cf869615-must-gather-output\") pod \"must-gather-42mv2\" (UID: \"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.659347 master-0 kubenswrapper[26053]: I0318 09:13:02.659161 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8wpl\" (UniqueName: \"kubernetes.io/projected/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-kube-api-access-s8wpl\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.659347 master-0 kubenswrapper[26053]: I0318 09:13:02.659241 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-must-gather-output\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.659797 master-0 kubenswrapper[26053]: I0318 09:13:02.659765 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-must-gather-output\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.659853 master-0 kubenswrapper[26053]: I0318 09:13:02.659390 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvrh\" (UniqueName: \"kubernetes.io/projected/038a46f0-0592-4e0e-ac3a-bb12cf869615-kube-api-access-5dvrh\") pod \"must-gather-42mv2\" (UID: \"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.660151 master-0 kubenswrapper[26053]: I0318 09:13:02.660118 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/038a46f0-0592-4e0e-ac3a-bb12cf869615-must-gather-output\") pod \"must-gather-42mv2\" (UID: \"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.675192 master-0 kubenswrapper[26053]: I0318 09:13:02.675146 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvrh\" (UniqueName: \"kubernetes.io/projected/038a46f0-0592-4e0e-ac3a-bb12cf869615-kube-api-access-5dvrh\") pod \"must-gather-42mv2\" (UID: \"038a46f0-0592-4e0e-ac3a-bb12cf869615\") " pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.677794 master-0 kubenswrapper[26053]: I0318 09:13:02.677757 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s8wpl\" (UniqueName: \"kubernetes.io/projected/39c506c0-b1f5-46c3-8128-0f0dd0be9b20-kube-api-access-s8wpl\") pod \"must-gather-l28gx\" (UID: \"39c506c0-b1f5-46c3-8128-0f0dd0be9b20\") " pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:02.753797 master-0 kubenswrapper[26053]: I0318 09:13:02.753727 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fvl8d/must-gather-42mv2" Mar 18 09:13:02.763021 master-0 kubenswrapper[26053]: I0318 09:13:02.762952 26053 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-fvl8d/must-gather-l28gx" Mar 18 09:13:03.198957 master-0 kubenswrapper[26053]: I0318 09:13:03.197682 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/must-gather-42mv2"] Mar 18 09:13:03.206609 master-0 kubenswrapper[26053]: W0318 09:13:03.206484 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod038a46f0_0592_4e0e_ac3a_bb12cf869615.slice/crio-2ae526e4f4c6c24135a24ac94caabd955996986399a2b4f1fb09875031be24c4 WatchSource:0}: Error finding container 2ae526e4f4c6c24135a24ac94caabd955996986399a2b4f1fb09875031be24c4: Status 404 returned error can't find the container with id 2ae526e4f4c6c24135a24ac94caabd955996986399a2b4f1fb09875031be24c4 Mar 18 09:13:03.236148 master-0 kubenswrapper[26053]: I0318 09:13:03.236090 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-42mv2" event={"ID":"038a46f0-0592-4e0e-ac3a-bb12cf869615","Type":"ContainerStarted","Data":"2ae526e4f4c6c24135a24ac94caabd955996986399a2b4f1fb09875031be24c4"} Mar 18 09:13:03.268541 master-0 kubenswrapper[26053]: I0318 09:13:03.268467 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/must-gather-l28gx"] Mar 18 09:13:03.273675 master-0 kubenswrapper[26053]: W0318 
09:13:03.272972 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39c506c0_b1f5_46c3_8128_0f0dd0be9b20.slice/crio-a16a440bb1cab36738f2bba681ea0354bbb4771b71e410f109f153b5c5aa47da WatchSource:0}: Error finding container a16a440bb1cab36738f2bba681ea0354bbb4771b71e410f109f153b5c5aa47da: Status 404 returned error can't find the container with id a16a440bb1cab36738f2bba681ea0354bbb4771b71e410f109f153b5c5aa47da Mar 18 09:13:04.246528 master-0 kubenswrapper[26053]: I0318 09:13:04.246464 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-l28gx" event={"ID":"39c506c0-b1f5-46c3-8128-0f0dd0be9b20","Type":"ContainerStarted","Data":"a16a440bb1cab36738f2bba681ea0354bbb4771b71e410f109f153b5c5aa47da"} Mar 18 09:13:05.256286 master-0 kubenswrapper[26053]: I0318 09:13:05.256208 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-42mv2" event={"ID":"038a46f0-0592-4e0e-ac3a-bb12cf869615","Type":"ContainerStarted","Data":"8f491ff28fce90fdcd596478e69e949656a49120b389e0691c9dd04fb4a6b10c"} Mar 18 09:13:05.256286 master-0 kubenswrapper[26053]: I0318 09:13:05.256287 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-42mv2" event={"ID":"038a46f0-0592-4e0e-ac3a-bb12cf869615","Type":"ContainerStarted","Data":"27bf15e52eacc743c57f9a85a0e1b6fd700e6df2479e79db091e0aa957602929"} Mar 18 09:13:05.279266 master-0 kubenswrapper[26053]: I0318 09:13:05.279199 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fvl8d/must-gather-42mv2" podStartSLOduration=2.169725466 podStartE2EDuration="3.279177668s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 09:13:03.212880356 +0000 UTC m=+570.706231747" lastFinishedPulling="2026-03-18 09:13:04.322332568 +0000 UTC m=+571.815683949" 
observedRunningTime="2026-03-18 09:13:05.271322451 +0000 UTC m=+572.764673832" watchObservedRunningTime="2026-03-18 09:13:05.279177668 +0000 UTC m=+572.772529049" Mar 18 09:13:07.541622 master-0 kubenswrapper[26053]: I0318 09:13:07.540994 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-q58jp_9cc640bf-cb5f-4493-b47b-6ea6f524525e/cluster-version-operator/0.log" Mar 18 09:13:10.413716 master-0 kubenswrapper[26053]: I0318 09:13:10.413642 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-688488c6-pgjmr_1039a3d2-df65-4e8b-85b1-4f99469f5459/oauth-openshift/0.log" Mar 18 09:13:10.625168 master-0 kubenswrapper[26053]: I0318 09:13:10.625113 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 09:13:10.824004 master-0 kubenswrapper[26053]: I0318 09:13:10.823679 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 09:13:10.836885 master-0 kubenswrapper[26053]: I0318 09:13:10.836849 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 09:13:10.852356 master-0 kubenswrapper[26053]: I0318 09:13:10.852328 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 09:13:10.867245 master-0 kubenswrapper[26053]: I0318 09:13:10.867201 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 09:13:10.889896 master-0 kubenswrapper[26053]: I0318 09:13:10.889648 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 09:13:10.903749 master-0 kubenswrapper[26053]: I0318 09:13:10.901246 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 09:13:10.923732 master-0 kubenswrapper[26053]: I0318 09:13:10.923687 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 09:13:10.983780 master-0 kubenswrapper[26053]: I0318 09:13:10.983600 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_c393a935-1821-4742-b1bb-0ee52ada5434/installer/0.log" Mar 18 09:13:11.034313 master-0 kubenswrapper[26053]: I0318 09:13:11.034269 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/1.log" Mar 18 09:13:11.034786 master-0 kubenswrapper[26053]: I0318 09:13:11.034719 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_08e4bcfe-d6ca-4799-9431-682673fe7380/installer/0.log" Mar 18 09:13:11.097904 master-0 kubenswrapper[26053]: I0318 09:13:11.097780 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-j75sc_e86268c9-7a83-4ccb-979a-feff00cb4b3e/authentication-operator/2.log" Mar 18 09:13:11.921399 master-0 kubenswrapper[26053]: I0318 09:13:11.921349 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-tjfg6_0c9de07b-1ef1-4228-b310-1007d999dc7b/assisted-installer-controller/0.log" Mar 18 09:13:11.966623 master-0 kubenswrapper[26053]: I0318 09:13:11.966219 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-sgsmn_93cb5ef1-e8f1-4d11-8c93-1abf24626176/router/3.log" Mar 18 09:13:11.980438 master-0 kubenswrapper[26053]: I0318 09:13:11.979620 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-sgsmn_93cb5ef1-e8f1-4d11-8c93-1abf24626176/router/2.log" Mar 18 09:13:12.507844 master-0 kubenswrapper[26053]: I0318 09:13:12.507789 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6ff67f5cc6-vg6s9_15b6612f-3a51-4a67-a566-8c520f85c6c2/oauth-apiserver/0.log" Mar 18 09:13:12.517341 master-0 kubenswrapper[26053]: I0318 09:13:12.516772 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6ff67f5cc6-vg6s9_15b6612f-3a51-4a67-a566-8c520f85c6c2/fix-audit-permissions/0.log" Mar 18 09:13:12.638738 master-0 kubenswrapper[26053]: I0318 09:13:12.638650 26053 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t"] Mar 18 09:13:12.639717 master-0 kubenswrapper[26053]: I0318 09:13:12.639691 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.663186 master-0 kubenswrapper[26053]: I0318 09:13:12.663145 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t"] Mar 18 09:13:12.741737 master-0 kubenswrapper[26053]: I0318 09:13:12.741668 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-proc\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.741968 master-0 kubenswrapper[26053]: I0318 09:13:12.741814 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffw79\" (UniqueName: \"kubernetes.io/projected/cc170cbd-c320-48b1-875e-24a52bf088d7-kube-api-access-ffw79\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.741968 master-0 kubenswrapper[26053]: I0318 09:13:12.741875 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-lib-modules\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.741968 master-0 kubenswrapper[26053]: I0318 09:13:12.741950 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-podres\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " 
pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.742101 master-0 kubenswrapper[26053]: I0318 09:13:12.742018 26053 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-sys\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.843980 master-0 kubenswrapper[26053]: I0318 09:13:12.843852 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-sys\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844172 master-0 kubenswrapper[26053]: I0318 09:13:12.844028 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-proc\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844172 master-0 kubenswrapper[26053]: I0318 09:13:12.844044 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-sys\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844239 master-0 kubenswrapper[26053]: I0318 09:13:12.844187 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffw79\" (UniqueName: \"kubernetes.io/projected/cc170cbd-c320-48b1-875e-24a52bf088d7-kube-api-access-ffw79\") pod 
\"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844368 master-0 kubenswrapper[26053]: I0318 09:13:12.844329 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-lib-modules\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844460 master-0 kubenswrapper[26053]: I0318 09:13:12.844431 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-lib-modules\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844511 master-0 kubenswrapper[26053]: I0318 09:13:12.844333 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-proc\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844511 master-0 kubenswrapper[26053]: I0318 09:13:12.844479 26053 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-podres\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.844590 master-0 kubenswrapper[26053]: I0318 09:13:12.844543 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/cc170cbd-c320-48b1-875e-24a52bf088d7-podres\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.867535 master-0 kubenswrapper[26053]: I0318 09:13:12.867469 26053 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffw79\" (UniqueName: \"kubernetes.io/projected/cc170cbd-c320-48b1-875e-24a52bf088d7-kube-api-access-ffw79\") pod \"perf-node-gather-daemonset-xfs9t\" (UID: \"cc170cbd-c320-48b1-875e-24a52bf088d7\") " pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.953627 master-0 kubenswrapper[26053]: I0318 09:13:12.953554 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/kube-rbac-proxy/0.log" Mar 18 09:13:12.971892 master-0 kubenswrapper[26053]: I0318 09:13:12.971784 26053 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:12.979723 master-0 kubenswrapper[26053]: I0318 09:13:12.979671 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/1.log" Mar 18 09:13:12.986621 master-0 kubenswrapper[26053]: I0318 09:13:12.986582 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log" Mar 18 09:13:13.006862 master-0 kubenswrapper[26053]: I0318 09:13:13.006799 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log" Mar 18 09:13:13.007726 master-0 kubenswrapper[26053]: I0318 09:13:13.007682 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/3.log" Mar 18 09:13:13.020549 master-0 kubenswrapper[26053]: I0318 09:13:13.020491 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/baremetal-kube-rbac-proxy/0.log" Mar 18 09:13:13.043234 master-0 kubenswrapper[26053]: I0318 09:13:13.042769 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/1.log" Mar 18 09:13:13.043234 master-0 kubenswrapper[26053]: I0318 09:13:13.042840 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log" Mar 18 09:13:13.060390 master-0 kubenswrapper[26053]: I0318 09:13:13.060351 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/kube-rbac-proxy/0.log" Mar 18 09:13:13.075282 master-0 kubenswrapper[26053]: I0318 09:13:13.075228 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log" Mar 18 09:13:13.079271 master-0 kubenswrapper[26053]: I0318 09:13:13.078391 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/1.log" Mar 18 09:13:13.345023 master-0 kubenswrapper[26053]: I0318 09:13:13.344945 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-l28gx" event={"ID":"39c506c0-b1f5-46c3-8128-0f0dd0be9b20","Type":"ContainerStarted","Data":"1ec259ff28cfa0ec29762d021d5de83313d23915b060bda662d893bcaba76f61"} Mar 18 09:13:13.345023 master-0 kubenswrapper[26053]: I0318 09:13:13.345007 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/must-gather-l28gx" event={"ID":"39c506c0-b1f5-46c3-8128-0f0dd0be9b20","Type":"ContainerStarted","Data":"ddae641cc4d1c69b907823c596c783badf49739f1ac5b173a74337ac2fce1f13"} Mar 18 09:13:13.374243 master-0 kubenswrapper[26053]: I0318 09:13:13.374082 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fvl8d/must-gather-l28gx" podStartSLOduration=2.447664486 podStartE2EDuration="11.374065458s" podCreationTimestamp="2026-03-18 09:13:02 +0000 UTC" firstStartedPulling="2026-03-18 
09:13:03.276244763 +0000 UTC m=+570.769596174" lastFinishedPulling="2026-03-18 09:13:12.202645755 +0000 UTC m=+579.695997146" observedRunningTime="2026-03-18 09:13:13.366386615 +0000 UTC m=+580.859737996" watchObservedRunningTime="2026-03-18 09:13:13.374065458 +0000 UTC m=+580.867416839" Mar 18 09:13:13.393857 master-0 kubenswrapper[26053]: I0318 09:13:13.393742 26053 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t"] Mar 18 09:13:13.398196 master-0 kubenswrapper[26053]: W0318 09:13:13.398134 26053 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcc170cbd_c320_48b1_875e_24a52bf088d7.slice/crio-dc9d2cb3da3974ea5d46df5a9858c05d14427e20dbe726d515b115f444c9588b WatchSource:0}: Error finding container dc9d2cb3da3974ea5d46df5a9858c05d14427e20dbe726d515b115f444c9588b: Status 404 returned error can't find the container with id dc9d2cb3da3974ea5d46df5a9858c05d14427e20dbe726d515b115f444c9588b Mar 18 09:13:13.875252 master-0 kubenswrapper[26053]: I0318 09:13:13.875188 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/0.log" Mar 18 09:13:13.876037 master-0 kubenswrapper[26053]: I0318 09:13:13.876003 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/cluster-cloud-controller-manager/1.log" Mar 18 09:13:13.894006 master-0 kubenswrapper[26053]: I0318 09:13:13.893887 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/config-sync-controllers/0.log" Mar 18 09:13:13.894191 master-0 
kubenswrapper[26053]: I0318 09:13:13.894118 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/config-sync-controllers/1.log" Mar 18 09:13:13.908873 master-0 kubenswrapper[26053]: I0318 09:13:13.908814 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vwqc4_94e2a8f0-2c2e-43da-9fa9-69edfcd77830/kube-rbac-proxy/0.log" Mar 18 09:13:14.353810 master-0 kubenswrapper[26053]: I0318 09:13:14.353745 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" event={"ID":"cc170cbd-c320-48b1-875e-24a52bf088d7","Type":"ContainerStarted","Data":"bfab7e045ef20535e2c687899084e4bebca0814bed6b70a01d422bef6e472637"} Mar 18 09:13:14.353810 master-0 kubenswrapper[26053]: I0318 09:13:14.353807 26053 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" event={"ID":"cc170cbd-c320-48b1-875e-24a52bf088d7","Type":"ContainerStarted","Data":"dc9d2cb3da3974ea5d46df5a9858c05d14427e20dbe726d515b115f444c9588b"} Mar 18 09:13:14.369557 master-0 kubenswrapper[26053]: I0318 09:13:14.369482 26053 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" podStartSLOduration=2.369461033 podStartE2EDuration="2.369461033s" podCreationTimestamp="2026-03-18 09:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-18 09:13:14.367288009 +0000 UTC m=+581.860639390" watchObservedRunningTime="2026-03-18 09:13:14.369461033 +0000 UTC m=+581.862812414" Mar 18 09:13:14.771643 master-0 kubenswrapper[26053]: I0318 09:13:14.771587 26053 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/kube-rbac-proxy/0.log" Mar 18 09:13:14.784328 master-0 kubenswrapper[26053]: I0318 09:13:14.784276 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/1.log" Mar 18 09:13:14.785684 master-0 kubenswrapper[26053]: I0318 09:13:14.785639 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log" Mar 18 09:13:14.792896 master-0 kubenswrapper[26053]: I0318 09:13:14.792860 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log" Mar 18 09:13:14.793509 master-0 kubenswrapper[26053]: I0318 09:13:14.793472 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/3.log" Mar 18 09:13:14.800127 master-0 kubenswrapper[26053]: I0318 09:13:14.800099 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/baremetal-kube-rbac-proxy/0.log" Mar 18 09:13:14.808284 master-0 kubenswrapper[26053]: I0318 09:13:14.808246 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/1.log" Mar 18 09:13:14.808501 master-0 kubenswrapper[26053]: I0318 09:13:14.808471 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log" Mar 18 09:13:14.821162 master-0 kubenswrapper[26053]: I0318 09:13:14.821114 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/kube-rbac-proxy/0.log" Mar 18 09:13:14.828644 master-0 kubenswrapper[26053]: I0318 09:13:14.828597 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log" Mar 18 09:13:14.831906 master-0 kubenswrapper[26053]: I0318 09:13:14.831863 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/1.log" Mar 18 09:13:15.005465 master-0 kubenswrapper[26053]: I0318 09:13:15.005415 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-9xqgw_a0cd1cf7-be6f-4baf-8761-69c693476de9/kube-rbac-proxy/0.log" Mar 18 09:13:15.024047 master-0 kubenswrapper[26053]: I0318 09:13:15.023925 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-9xqgw_a0cd1cf7-be6f-4baf-8761-69c693476de9/cloud-credential-operator/0.log" Mar 18 09:13:15.030020 master-0 kubenswrapper[26053]: I0318 09:13:15.029980 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-9xqgw_a0cd1cf7-be6f-4baf-8761-69c693476de9/cloud-credential-operator/1.log" Mar 18 09:13:15.359342 master-0 kubenswrapper[26053]: I0318 09:13:15.359224 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:16.178033 master-0 kubenswrapper[26053]: I0318 09:13:16.177993 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-config-operator/2.log" Mar 18 09:13:16.190443 master-0 kubenswrapper[26053]: I0318 09:13:16.190377 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-config-operator/3.log" Mar 18 09:13:16.202079 master-0 kubenswrapper[26053]: I0318 09:13:16.202025 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-whh6r_95143c61-6f91-4cd4-9411-31c2fb75d4d0/openshift-api/0.log" Mar 18 09:13:16.795164 master-0 kubenswrapper[26053]: I0318 09:13:16.795120 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-mtpcs_8dc1b108-349c-48ab-a6e5-5943067ced62/console-operator/0.log" Mar 18 09:13:17.214655 master-0 kubenswrapper[26053]: I0318 09:13:17.214587 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-865c46fcb5-r7nsh_9affd559-9165-4444-90bd-a29ffce19091/console/0.log" Mar 18 09:13:17.231029 master-0 kubenswrapper[26053]: I0318 09:13:17.230958 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-bfrtz_bbedaed5-a2a1-4853-8b60-0baf3d1b143d/download-server/0.log" Mar 18 09:13:17.778277 master-0 kubenswrapper[26053]: I0318 09:13:17.778241 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-9f7lz_2a864188-ada6-4ec2-bf9f-72dab210f0ce/cluster-storage-operator/1.log" Mar 18 09:13:17.778916 master-0 kubenswrapper[26053]: I0318 09:13:17.778902 
26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-9f7lz_2a864188-ada6-4ec2-bf9f-72dab210f0ce/cluster-storage-operator/0.log" Mar 18 09:13:17.793903 master-0 kubenswrapper[26053]: I0318 09:13:17.793863 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/4.log" Mar 18 09:13:17.794718 master-0 kubenswrapper[26053]: I0318 09:13:17.794687 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-qnc62_4e919445-81d0-4663-8941-f596d8121305/snapshot-controller/5.log" Mar 18 09:13:17.822155 master-0 kubenswrapper[26053]: I0318 09:13:17.822100 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-lhcpp_c5c995cf-40a0-4cd6-87fa-96a522f7bc57/csi-snapshot-controller-operator/0.log" Mar 18 09:13:17.823860 master-0 kubenswrapper[26053]: I0318 09:13:17.823824 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-lhcpp_c5c995cf-40a0-4cd6-87fa-96a522f7bc57/csi-snapshot-controller-operator/1.log" Mar 18 09:13:18.289520 master-0 kubenswrapper[26053]: I0318 09:13:18.289424 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-2649q_4192ea44-a38c-4b70-93c3-8070da2ffe2f/dns-operator/0.log" Mar 18 09:13:18.299643 master-0 kubenswrapper[26053]: I0318 09:13:18.299606 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-2649q_4192ea44-a38c-4b70-93c3-8070da2ffe2f/kube-rbac-proxy/0.log" Mar 18 09:13:18.682254 master-0 kubenswrapper[26053]: I0318 09:13:18.682205 26053 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-dns_dns-default-pj485_b2588f5c-327c-49cc-8cfb-0cce1ad758d5/dns/0.log" Mar 18 09:13:18.704245 master-0 kubenswrapper[26053]: I0318 09:13:18.704198 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pj485_b2588f5c-327c-49cc-8cfb-0cce1ad758d5/kube-rbac-proxy/0.log" Mar 18 09:13:18.744320 master-0 kubenswrapper[26053]: I0318 09:13:18.744271 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-thqlt_c5e43736-33c3-4949-98ca-971332541d64/dns-node-resolver/0.log" Mar 18 09:13:19.229366 master-0 kubenswrapper[26053]: I0318 09:13:19.229301 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/1.log" Mar 18 09:13:19.234452 master-0 kubenswrapper[26053]: I0318 09:13:19.234367 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-f2nfl_bb6ef4c4-bff3-4559-8e42-582bbd668b7c/etcd-operator/2.log" Mar 18 09:13:19.664494 master-0 kubenswrapper[26053]: I0318 09:13:19.664356 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 18 09:13:19.897364 master-0 kubenswrapper[26053]: I0318 09:13:19.897313 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 18 09:13:19.910194 master-0 kubenswrapper[26053]: I0318 09:13:19.910158 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 18 09:13:19.919864 master-0 kubenswrapper[26053]: I0318 09:13:19.919771 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 18 09:13:19.931171 master-0 
kubenswrapper[26053]: I0318 09:13:19.931146 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 18 09:13:19.942618 master-0 kubenswrapper[26053]: I0318 09:13:19.942599 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 18 09:13:19.954636 master-0 kubenswrapper[26053]: I0318 09:13:19.954616 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 18 09:13:19.965389 master-0 kubenswrapper[26053]: I0318 09:13:19.965356 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 18 09:13:20.006360 master-0 kubenswrapper[26053]: I0318 09:13:20.006303 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_c393a935-1821-4742-b1bb-0ee52ada5434/installer/0.log" Mar 18 09:13:20.047495 master-0 kubenswrapper[26053]: I0318 09:13:20.047433 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_08e4bcfe-d6ca-4799-9431-682673fe7380/installer/0.log" Mar 18 09:13:20.591644 master-0 kubenswrapper[26053]: I0318 09:13:20.591494 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-c4lgf_6c56e1ac-8752-4e46-8692-93716087f0e0/cluster-image-registry-operator/0.log" Mar 18 09:13:20.595561 master-0 kubenswrapper[26053]: I0318 09:13:20.595443 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-c4lgf_6c56e1ac-8752-4e46-8692-93716087f0e0/cluster-image-registry-operator/1.log" Mar 18 09:13:20.613784 master-0 kubenswrapper[26053]: I0318 09:13:20.613706 26053 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-lds2c_cb02136a-629f-450c-bd13-4287849188c6/node-ca/0.log" Mar 18 09:13:21.107644 master-0 kubenswrapper[26053]: I0318 09:13:21.107593 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/4.log" Mar 18 09:13:21.115536 master-0 kubenswrapper[26053]: I0318 09:13:21.115491 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/ingress-operator/5.log" Mar 18 09:13:21.127598 master-0 kubenswrapper[26053]: I0318 09:13:21.127530 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-4cxfh_bf7a3329-a04c-4b58-9364-b907c00cbe08/kube-rbac-proxy/0.log" Mar 18 09:13:21.546016 master-0 kubenswrapper[26053]: I0318 09:13:21.545957 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-226gc_9d66a9b2-7f9c-45bd-a793-b2ce9cd571cd/serve-healthcheck-canary/0.log" Mar 18 09:13:21.958986 master-0 kubenswrapper[26053]: I0318 09:13:21.958938 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-89rtc_f918d08d-df7c-4e8d-85ba-1c92d766db16/insights-operator/0.log" Mar 18 09:13:22.981675 master-0 kubenswrapper[26053]: I0318 09:13:22.981624 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/alertmanager/0.log" Mar 18 09:13:22.990486 master-0 kubenswrapper[26053]: I0318 09:13:22.990424 26053 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-fvl8d/perf-node-gather-daemonset-xfs9t" Mar 18 09:13:22.996938 master-0 kubenswrapper[26053]: I0318 09:13:22.996874 26053 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/config-reloader/0.log" Mar 18 09:13:23.017363 master-0 kubenswrapper[26053]: I0318 09:13:23.017313 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy-web/0.log" Mar 18 09:13:23.042373 master-0 kubenswrapper[26053]: I0318 09:13:23.042322 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy/0.log" Mar 18 09:13:23.058668 master-0 kubenswrapper[26053]: I0318 09:13:23.058411 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy-metric/0.log" Mar 18 09:13:23.073740 master-0 kubenswrapper[26053]: I0318 09:13:23.073695 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/prom-label-proxy/0.log" Mar 18 09:13:23.095189 master-0 kubenswrapper[26053]: I0318 09:13:23.095135 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/init-config-reloader/0.log" Mar 18 09:13:23.138223 master-0 kubenswrapper[26053]: I0318 09:13:23.138177 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr_09269324-c908-474d-818f-5cd49406f1e2/cluster-monitoring-operator/0.log" Mar 18 09:13:23.151319 master-0 kubenswrapper[26053]: I0318 09:13:23.151236 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-state-metrics/0.log" Mar 18 09:13:23.162884 master-0 kubenswrapper[26053]: I0318 09:13:23.162841 26053 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-rbac-proxy-main/0.log" Mar 18 09:13:23.174923 master-0 kubenswrapper[26053]: I0318 09:13:23.174872 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-rbac-proxy-self/0.log" Mar 18 09:13:23.192399 master-0 kubenswrapper[26053]: I0318 09:13:23.192325 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-599f97d97f-6zmlx_876181ab-b5e4-4d9d-aae8-710a9e7ad213/metrics-server/0.log" Mar 18 09:13:23.204172 master-0 kubenswrapper[26053]: I0318 09:13:23.204111 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dfd446df6-76mgq_a5751f72-30f7-439b-a1de-af588611984c/monitoring-plugin/0.log" Mar 18 09:13:23.222803 master-0 kubenswrapper[26053]: I0318 09:13:23.222750 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/node-exporter/0.log" Mar 18 09:13:23.232936 master-0 kubenswrapper[26053]: I0318 09:13:23.232841 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/kube-rbac-proxy/0.log" Mar 18 09:13:23.247664 master-0 kubenswrapper[26053]: I0318 09:13:23.247615 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/init-textfile/0.log" Mar 18 09:13:23.267860 master-0 kubenswrapper[26053]: I0318 09:13:23.267811 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/kube-rbac-proxy-main/0.log" Mar 18 09:13:23.288228 master-0 kubenswrapper[26053]: 
I0318 09:13:23.288173 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/kube-rbac-proxy-self/0.log" Mar 18 09:13:23.308365 master-0 kubenswrapper[26053]: I0318 09:13:23.308309 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/openshift-state-metrics/0.log" Mar 18 09:13:23.363884 master-0 kubenswrapper[26053]: I0318 09:13:23.363796 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/prometheus/0.log" Mar 18 09:13:23.375430 master-0 kubenswrapper[26053]: I0318 09:13:23.375380 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/config-reloader/0.log" Mar 18 09:13:23.387398 master-0 kubenswrapper[26053]: I0318 09:13:23.387349 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/thanos-sidecar/0.log" Mar 18 09:13:23.397802 master-0 kubenswrapper[26053]: I0318 09:13:23.397719 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy-web/0.log" Mar 18 09:13:23.413045 master-0 kubenswrapper[26053]: I0318 09:13:23.412988 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy/0.log" Mar 18 09:13:23.426880 master-0 kubenswrapper[26053]: I0318 09:13:23.426832 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy-thanos/0.log" Mar 18 09:13:23.451021 master-0 kubenswrapper[26053]: I0318 09:13:23.450965 
26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/init-config-reloader/0.log" Mar 18 09:13:23.470081 master-0 kubenswrapper[26053]: I0318 09:13:23.470030 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-rqgh5_8683c8c6-3a77-4b46-8898-142f9781b49c/prometheus-operator/0.log" Mar 18 09:13:23.482279 master-0 kubenswrapper[26053]: I0318 09:13:23.482229 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-rqgh5_8683c8c6-3a77-4b46-8898-142f9781b49c/kube-rbac-proxy/0.log" Mar 18 09:13:23.498106 master-0 kubenswrapper[26053]: I0318 09:13:23.498003 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-4jrzp_cdf1c657-a9dc-455a-b2fd-27a518bc5199/prometheus-operator-admission-webhook/0.log" Mar 18 09:13:23.518008 master-0 kubenswrapper[26053]: I0318 09:13:23.517959 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/telemeter-client/0.log" Mar 18 09:13:23.530352 master-0 kubenswrapper[26053]: I0318 09:13:23.530305 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/reload/0.log" Mar 18 09:13:23.542351 master-0 kubenswrapper[26053]: I0318 09:13:23.542295 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/kube-rbac-proxy/0.log" Mar 18 09:13:23.573339 master-0 kubenswrapper[26053]: I0318 09:13:23.573294 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/thanos-query/0.log" 
Mar 18 09:13:23.587045 master-0 kubenswrapper[26053]: I0318 09:13:23.587003 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-web/0.log" Mar 18 09:13:23.597942 master-0 kubenswrapper[26053]: I0318 09:13:23.597820 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy/0.log" Mar 18 09:13:23.616192 master-0 kubenswrapper[26053]: I0318 09:13:23.616149 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/prom-label-proxy/0.log" Mar 18 09:13:23.628298 master-0 kubenswrapper[26053]: I0318 09:13:23.628252 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-rules/0.log" Mar 18 09:13:23.643205 master-0 kubenswrapper[26053]: I0318 09:13:23.643143 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-metrics/0.log" Mar 18 09:13:25.312011 master-0 kubenswrapper[26053]: I0318 09:13:25.311913 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/1.log" Mar 18 09:13:25.312798 master-0 kubenswrapper[26053]: I0318 09:13:25.312391 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/0.log" Mar 18 09:13:25.334173 master-0 kubenswrapper[26053]: I0318 09:13:25.334126 26053 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-84qxz_cda44dd8-895a-4eab-bedc-83f38efa2482/tuned/0.log" Mar 18 09:13:26.166310 master-0 kubenswrapper[26053]: I0318 09:13:26.166174 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/2.log" Mar 18 09:13:26.195848 master-0 kubenswrapper[26053]: I0318 09:13:26.195777 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-pp4r9_65cff83a-8d8f-4e4f-96ef-99941c29ba53/kube-apiserver-operator/3.log" Mar 18 09:13:26.289440 master-0 kubenswrapper[26053]: I0318 09:13:26.289386 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/sushy-emulator_nova-console-poller-58b4c9589-b98wt_77c1e3a8-37e4-4c06-b3f8-16aa75ae2665/console-poller-a0b25cc8-8476-4206-ab9f-2ab1a54ffdc6/0.log" Mar 18 09:13:26.298218 master-0 kubenswrapper[26053]: I0318 09:13:26.298178 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/sushy-emulator_nova-console-poller-58b4c9589-b98wt_77c1e3a8-37e4-4c06-b3f8-16aa75ae2665/console-poller-ee26d2f3-c6bf-4726-ae18-3d0acbd89c8f/0.log" Mar 18 09:13:26.313475 master-0 kubenswrapper[26053]: I0318 09:13:26.313435 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/sushy-emulator_nova-console-recorder-75f56d88f-cbf8h_3c30b8c4-b79f-417d-9d8d-265040e7c12a/console-recorder-a0b25cc8-8476-4206-ab9f-2ab1a54ffdc6/0.log" Mar 18 09:13:26.322503 master-0 kubenswrapper[26053]: I0318 09:13:26.322475 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/sushy-emulator_nova-console-recorder-75f56d88f-cbf8h_3c30b8c4-b79f-417d-9d8d-265040e7c12a/console-recorder-ee26d2f3-c6bf-4726-ae18-3d0acbd89c8f/0.log" Mar 18 09:13:26.339482 master-0 kubenswrapper[26053]: I0318 09:13:26.339402 26053 log.go:25] "Finished parsing log file" 
path="/var/log/pods/sushy-emulator_sushy-emulator-59477995f9-x2ftq_955d8125-124d-461e-9742-93d11cbb85ff/sushy-emulator/0.log"
Mar 18 09:13:26.816772 master-0 kubenswrapper[26053]: I0318 09:13:26.816722 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff-8938-4f21-8977-c29a19c85afb/installer/0.log"
Mar 18 09:13:26.833443 master-0 kubenswrapper[26053]: I0318 09:13:26.833360 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_93298cb2-d669-49ea-92be-8891f07ab1c5/installer/0.log"
Mar 18 09:13:26.856012 master-0 kubenswrapper[26053]: I0318 09:13:26.855937 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-2-master-0_c46fcf39-9167-4ec2-9d2c-0a622bc69d13/installer/0.log"
Mar 18 09:13:26.875748 master-0 kubenswrapper[26053]: I0318 09:13:26.875678 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_6030c175-df60-4af1-85b9-78a2cdc9f320/installer/0.log"
Mar 18 09:13:26.895902 master-0 kubenswrapper[26053]: I0318 09:13:26.895855 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-6-master-0_1723c159-3187-46be-89bb-a529ca0c54db/installer/0.log"
Mar 18 09:13:26.988168 master-0 kubenswrapper[26053]: I0318 09:13:26.988112 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver/0.log"
Mar 18 09:13:26.998022 master-0 kubenswrapper[26053]: I0318 09:13:26.997989 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-syncer/0.log"
Mar 18 09:13:27.010229 master-0 kubenswrapper[26053]: I0318 09:13:27.010182 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-cert-regeneration-controller/0.log"
Mar 18 09:13:27.029928 master-0 kubenswrapper[26053]: I0318 09:13:27.029850 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-insecure-readyz/0.log"
Mar 18 09:13:27.044270 master-0 kubenswrapper[26053]: I0318 09:13:27.044230 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/kube-apiserver-check-endpoints/0.log"
Mar 18 09:13:27.056064 master-0 kubenswrapper[26053]: I0318 09:13:27.056012 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_274c4bebf95a655851b2cf276fe43ef7/setup/0.log"
Mar 18 09:13:27.705926 master-0 kubenswrapper[26053]: I0318 09:13:27.705867 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/kube-rbac-proxy/0.log"
Mar 18 09:13:27.723823 master-0 kubenswrapper[26053]: I0318 09:13:27.723760 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/2.log"
Mar 18 09:13:27.724406 master-0 kubenswrapper[26053]: I0318 09:13:27.724382 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log"
Mar 18 09:13:28.158060 master-0 kubenswrapper[26053]: I0318 09:13:28.157908 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/kube-multus-additional-cni-plugins/0.log"
Mar 18 09:13:28.171285 master-0 kubenswrapper[26053]: I0318 09:13:28.171243 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/egress-router-binary-copy/0.log"
Mar 18 09:13:28.182649 master-0 kubenswrapper[26053]: I0318 09:13:28.182529 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/cni-plugins/0.log"
Mar 18 09:13:28.197819 master-0 kubenswrapper[26053]: I0318 09:13:28.197772 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/bond-cni-plugin/0.log"
Mar 18 09:13:28.213743 master-0 kubenswrapper[26053]: I0318 09:13:28.213698 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/routeoverride-cni/0.log"
Mar 18 09:13:28.225820 master-0 kubenswrapper[26053]: I0318 09:13:28.225772 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/whereabouts-cni-bincopy/0.log"
Mar 18 09:13:28.240091 master-0 kubenswrapper[26053]: I0318 09:13:28.240036 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/whereabouts-cni/0.log"
Mar 18 09:13:28.258609 master-0 kubenswrapper[26053]: I0318 09:13:28.258543 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-hq2gr_2c108235-6537-4130-a858-6d38cd71e4fd/multus-admission-controller/0.log"
Mar 18 09:13:28.271102 master-0 kubenswrapper[26053]: I0318 09:13:28.271066 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-hq2gr_2c108235-6537-4130-a858-6d38cd71e4fd/kube-rbac-proxy/0.log"
Mar 18 09:13:28.331590 master-0 kubenswrapper[26053]: I0318 09:13:28.321403 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h7vq8_af1fbcf2-d4de-4015-89fc-2565e855a04d/kube-multus/0.log"
Mar 18 09:13:28.397048 master-0 kubenswrapper[26053]: I0318 09:13:28.397012 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2xs9n_e48101ca-f356-45e3-93d7-4e17b8d8066c/network-metrics-daemon/0.log"
Mar 18 09:13:28.405400 master-0 kubenswrapper[26053]: I0318 09:13:28.405336 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2xs9n_e48101ca-f356-45e3-93d7-4e17b8d8066c/kube-rbac-proxy/0.log"
Mar 18 09:13:28.954550 master-0 kubenswrapper[26053]: I0318 09:13:28.954468 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_ea4b43a1-e9cd-44e4-9c79-55c53146d9e8/installer/0.log"
Mar 18 09:13:28.972972 master-0 kubenswrapper[26053]: I0318 09:13:28.972898 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-5-master-0_7dcc6db5-f20e-431f-9f0b-818bd3830f41/installer/0.log"
Mar 18 09:13:28.995776 master-0 kubenswrapper[26053]: I0318 09:13:28.995692 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-6-master-0_e37bd95a-3bb3-44cc-9008-ac4a2fd9d7d4/installer/0.log"
Mar 18 09:13:29.118211 master-0 kubenswrapper[26053]: I0318 09:13:29.117769 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e665517aa7aaa407efaa6a71427f5785/kube-controller-manager/0.log"
Mar 18 09:13:29.160482 master-0 kubenswrapper[26053]: I0318 09:13:29.159952 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e665517aa7aaa407efaa6a71427f5785/cluster-policy-controller/0.log"
Mar 18 09:13:29.173398 master-0 kubenswrapper[26053]: I0318 09:13:29.173351 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e665517aa7aaa407efaa6a71427f5785/kube-controller-manager-cert-syncer/0.log"
Mar 18 09:13:29.183589 master-0 kubenswrapper[26053]: I0318 09:13:29.183523 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e665517aa7aaa407efaa6a71427f5785/kube-controller-manager-recovery-controller/0.log"
Mar 18 09:13:29.198410 master-0 kubenswrapper[26053]: I0318 09:13:29.198373 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_revision-pruner-6-master-0_b427f955-9128-4b6d-a2f1-43297755dc0b/pruner/0.log"
Mar 18 09:13:29.713594 master-0 kubenswrapper[26053]: I0318 09:13:29.713543 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/1.log"
Mar 18 09:13:29.738657 master-0 kubenswrapper[26053]: I0318 09:13:29.738548 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-xlfrc_1df9560e-21f0-44fe-bb51-4bc0fde4a3ac/kube-controller-manager-operator/2.log"
Mar 18 09:13:30.736775 master-0 kubenswrapper[26053]: I0318 09:13:30.736726 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_3253d87f-ae48-42cf-950f-f508a9b82d0d/installer/0.log"
Mar 18 09:13:30.757365 master-0 kubenswrapper[26053]: I0318 09:13:30.757295 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_5ca7b84e-0aff-4526-948a-03492712ff8f/installer/0.log"
Mar 18 09:13:30.777000 master-0 kubenswrapper[26053]: I0318 09:13:30.776951 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-retry-1-master-0_e2af879e-1465-40bf-bf72-30c7e89386a3/installer/0.log"
Mar 18 09:13:30.802956 master-0 kubenswrapper[26053]: I0318 09:13:30.802156 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_315ae422-1357-4fce-a2f4-eb10aaaaae24/installer/0.log"
Mar 18 09:13:30.839328 master-0 kubenswrapper[26053]: I0318 09:13:30.839273 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler/0.log"
Mar 18 09:13:30.851144 master-0 kubenswrapper[26053]: I0318 09:13:30.850642 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-cert-syncer/0.log"
Mar 18 09:13:30.863520 master-0 kubenswrapper[26053]: I0318 09:13:30.863378 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/kube-scheduler-recovery-controller/0.log"
Mar 18 09:13:30.878952 master-0 kubenswrapper[26053]: I0318 09:13:30.878907 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8e27b7d086edf5d2cf47b703574641d8/wait-for-host-port/1.log"
Mar 18 09:13:31.372690 master-0 kubenswrapper[26053]: I0318 09:13:31.372646 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/1.log"
Mar 18 09:13:31.392798 master-0 kubenswrapper[26053]: I0318 09:13:31.392743 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-cpbdr_0f9ba06c-7a6b-4f46-a747-80b0a0b58600/kube-scheduler-operator-container/2.log"
Mar 18 09:13:31.939532 master-0 kubenswrapper[26053]: I0318 09:13:31.939470 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-8487694857-sbsqg_c6176328-5931-405b-8519-8e4bc83bedfb/migrator/0.log"
Mar 18 09:13:31.957484 master-0 kubenswrapper[26053]: I0318 09:13:31.957427 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-8487694857-sbsqg_c6176328-5931-405b-8519-8e4bc83bedfb/graceful-termination/0.log"
Mar 18 09:13:32.281164 master-0 kubenswrapper[26053]: I0318 09:13:32.281120 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/2.log"
Mar 18 09:13:32.282957 master-0 kubenswrapper[26053]: I0318 09:13:32.282926 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-6bb5bfb6fd-l9wpl_be2682e4-cb63-4102-a83e-ef28023e273a/kube-storage-version-migrator-operator/3.log"
Mar 18 09:13:32.809540 master-0 kubenswrapper[26053]: I0318 09:13:32.809491 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/alertmanager/0.log"
Mar 18 09:13:32.815910 master-0 kubenswrapper[26053]: I0318 09:13:32.815866 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/config-reloader/0.log"
Mar 18 09:13:32.823215 master-0 kubenswrapper[26053]: I0318 09:13:32.823170 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy-web/0.log"
Mar 18 09:13:32.836064 master-0 kubenswrapper[26053]: I0318 09:13:32.836014 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy/0.log"
Mar 18 09:13:32.844382 master-0 kubenswrapper[26053]: I0318 09:13:32.844336 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/kube-rbac-proxy-metric/0.log"
Mar 18 09:13:32.853209 master-0 kubenswrapper[26053]: I0318 09:13:32.853155 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/prom-label-proxy/0.log"
Mar 18 09:13:32.861383 master-0 kubenswrapper[26053]: I0318 09:13:32.861312 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_1ba3504d-c2ce-407f-b0e6-14582e17560e/init-config-reloader/0.log"
Mar 18 09:13:32.876286 master-0 kubenswrapper[26053]: I0318 09:13:32.876215 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/kube-rbac-proxy/0.log"
Mar 18 09:13:32.888632 master-0 kubenswrapper[26053]: I0318 09:13:32.888474 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/machine-approver-controller/0.log"
Mar 18 09:13:32.888885 master-0 kubenswrapper[26053]: I0318 09:13:32.888755 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-r4mv6_cdcd27a4-6d46-47af-a14a-65f6501c10f0/machine-approver-controller/1.log"
Mar 18 09:13:32.891155 master-0 kubenswrapper[26053]: I0318 09:13:32.890801 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/kube-rbac-proxy/0.log"
Mar 18 09:13:32.896057 master-0 kubenswrapper[26053]: I0318 09:13:32.896036 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-8vfjr_09269324-c908-474d-818f-5cd49406f1e2/cluster-monitoring-operator/0.log"
Mar 18 09:13:32.899665 master-0 kubenswrapper[26053]: I0318 09:13:32.899641 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/2.log"
Mar 18 09:13:32.901222 master-0 kubenswrapper[26053]: I0318 09:13:32.900738 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-vbxdw_411d544f-e105-44f0-927a-f61406b3f070/manager/1.log"
Mar 18 09:13:32.909714 master-0 kubenswrapper[26053]: I0318 09:13:32.907945 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-state-metrics/0.log"
Mar 18 09:13:32.915048 master-0 kubenswrapper[26053]: I0318 09:13:32.915007 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-rbac-proxy-main/0.log"
Mar 18 09:13:32.926699 master-0 kubenswrapper[26053]: I0318 09:13:32.926654 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-nbkgf_15798f4d-8bcc-4e24-bb18-8dff1f4edf59/kube-rbac-proxy-self/0.log"
Mar 18 09:13:32.938923 master-0 kubenswrapper[26053]: I0318 09:13:32.938875 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-599f97d97f-6zmlx_876181ab-b5e4-4d9d-aae8-710a9e7ad213/metrics-server/0.log"
Mar 18 09:13:32.951378 master-0 kubenswrapper[26053]: I0318 09:13:32.951339 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-7dfd446df6-76mgq_a5751f72-30f7-439b-a1de-af588611984c/monitoring-plugin/0.log"
Mar 18 09:13:32.974741 master-0 kubenswrapper[26053]: I0318 09:13:32.974702 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/node-exporter/0.log"
Mar 18 09:13:32.986576 master-0 kubenswrapper[26053]: I0318 09:13:32.984987 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/kube-rbac-proxy/0.log"
Mar 18 09:13:32.995265 master-0 kubenswrapper[26053]: I0318 09:13:32.994223 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-kp8pg_599418d3-6afa-46ab-9afa-659134f7ac94/init-textfile/0.log"
Mar 18 09:13:33.007310 master-0 kubenswrapper[26053]: I0318 09:13:33.007248 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/kube-rbac-proxy-main/0.log"
Mar 18 09:13:33.019062 master-0 kubenswrapper[26053]: I0318 09:13:33.019000 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/kube-rbac-proxy-self/0.log"
Mar 18 09:13:33.032213 master-0 kubenswrapper[26053]: I0318 09:13:33.032168 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-rm78n_2b59dbf5-0a61-4981-aed3-e73550615c4a/openshift-state-metrics/0.log"
Mar 18 09:13:33.077497 master-0 kubenswrapper[26053]: I0318 09:13:33.077394 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/prometheus/0.log"
Mar 18 09:13:33.082743 master-0 kubenswrapper[26053]: I0318 09:13:33.082701 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/config-reloader/0.log"
Mar 18 09:13:33.090812 master-0 kubenswrapper[26053]: I0318 09:13:33.090786 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/thanos-sidecar/0.log"
Mar 18 09:13:33.100188 master-0 kubenswrapper[26053]: I0318 09:13:33.100157 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy-web/0.log"
Mar 18 09:13:33.108527 master-0 kubenswrapper[26053]: I0318 09:13:33.108478 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy/0.log"
Mar 18 09:13:33.118810 master-0 kubenswrapper[26053]: I0318 09:13:33.118756 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/kube-rbac-proxy-thanos/0.log"
Mar 18 09:13:33.134262 master-0 kubenswrapper[26053]: I0318 09:13:33.134224 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_2e6ee2ab-ba60-4663-90ab-10035e03107a/init-config-reloader/0.log"
Mar 18 09:13:33.151987 master-0 kubenswrapper[26053]: I0318 09:13:33.151948 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-rqgh5_8683c8c6-3a77-4b46-8898-142f9781b49c/prometheus-operator/0.log"
Mar 18 09:13:33.158836 master-0 kubenswrapper[26053]: I0318 09:13:33.158791 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-rqgh5_8683c8c6-3a77-4b46-8898-142f9781b49c/kube-rbac-proxy/0.log"
Mar 18 09:13:33.172264 master-0 kubenswrapper[26053]: I0318 09:13:33.172233 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-4jrzp_cdf1c657-a9dc-455a-b2fd-27a518bc5199/prometheus-operator-admission-webhook/0.log"
Mar 18 09:13:33.186410 master-0 kubenswrapper[26053]: I0318 09:13:33.186372 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/telemeter-client/0.log"
Mar 18 09:13:33.191046 master-0 kubenswrapper[26053]: I0318 09:13:33.191027 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/reload/0.log"
Mar 18 09:13:33.204145 master-0 kubenswrapper[26053]: I0318 09:13:33.204100 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-55b7f8bbf6-nj5q5_db4437ea-0a1e-478b-a9fe-a06c182f83a1/kube-rbac-proxy/0.log"
Mar 18 09:13:33.225322 master-0 kubenswrapper[26053]: I0318 09:13:33.225259 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/thanos-query/0.log"
Mar 18 09:13:33.236634 master-0 kubenswrapper[26053]: I0318 09:13:33.236559 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-web/0.log"
Mar 18 09:13:33.259121 master-0 kubenswrapper[26053]: I0318 09:13:33.259005 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy/0.log"
Mar 18 09:13:33.278285 master-0 kubenswrapper[26053]: I0318 09:13:33.278090 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/prom-label-proxy/0.log"
Mar 18 09:13:33.290141 master-0 kubenswrapper[26053]: I0318 09:13:33.290010 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-rules/0.log"
Mar 18 09:13:33.296078 master-0 kubenswrapper[26053]: I0318 09:13:33.296005 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5bc4ddd65f-jtdvg_6706b96f-9bc3-4664-9fdc-2c0693ddf787/kube-rbac-proxy-metrics/0.log"
Mar 18 09:13:33.452050 master-0 kubenswrapper[26053]: I0318 09:13:33.451862 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 18 09:13:33.452912 master-0 kubenswrapper[26053]: I0318 09:13:33.452868 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/3.log"
Mar 18 09:13:33.472796 master-0 kubenswrapper[26053]: I0318 09:13:33.472170 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/setup/0.log"
Mar 18 09:13:33.491398 master-0 kubenswrapper[26053]: I0318 09:13:33.491324 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-b4f87c5b9-prrnd_d7205eeb-912b-4c31-b08f-ed0b2a1319aa/machine-config-controller/1.log"
Mar 18 09:13:33.494543 master-0 kubenswrapper[26053]: I0318 09:13:33.494500 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-b4f87c5b9-prrnd_d7205eeb-912b-4c31-b08f-ed0b2a1319aa/machine-config-controller/0.log"
Mar 18 09:13:33.508358 master-0 kubenswrapper[26053]: I0318 09:13:33.508292 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-b4f87c5b9-prrnd_d7205eeb-912b-4c31-b08f-ed0b2a1319aa/kube-rbac-proxy/0.log"
Mar 18 09:13:33.532277 master-0 kubenswrapper[26053]: I0318 09:13:33.532217 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-rhm2f_a7cf2cff-ca67-4cc6-99e7-99478ab89af4/machine-config-daemon/0.log"
Mar 18 09:13:33.541933 master-0 kubenswrapper[26053]: I0318 09:13:33.541879 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-rhm2f_a7cf2cff-ca67-4cc6-99e7-99478ab89af4/kube-rbac-proxy/0.log"
Mar 18 09:13:33.561950 master-0 kubenswrapper[26053]: I0318 09:13:33.561882 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84d549f6d5-vj84b_bef948b9-eef4-404b-9b49-6e4a2ceea73b/machine-config-operator/0.log"
Mar 18 09:13:33.574339 master-0 kubenswrapper[26053]: I0318 09:13:33.574006 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84d549f6d5-vj84b_bef948b9-eef4-404b-9b49-6e4a2ceea73b/kube-rbac-proxy/0.log"
Mar 18 09:13:33.592739 master-0 kubenswrapper[26053]: I0318 09:13:33.592667 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-rw7hw_14489ef7-8df3-4a3b-a137-3a78e89d425b/machine-config-server/0.log"
Mar 18 09:13:34.326486 master-0 kubenswrapper[26053]: I0318 09:13:34.326433 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/kube-rbac-proxy/0.log"
Mar 18 09:13:34.343939 master-0 kubenswrapper[26053]: I0318 09:13:34.343881 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/1.log"
Mar 18 09:13:34.344559 master-0 kubenswrapper[26053]: I0318 09:13:34.344510 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-tx2pv_e88b021c-c810-4a68-aa48-d8666b52330e/cluster-autoscaler-operator/0.log"
Mar 18 09:13:34.354414 master-0 kubenswrapper[26053]: I0318 09:13:34.354358 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/2.log"
Mar 18 09:13:34.355327 master-0 kubenswrapper[26053]: I0318 09:13:34.355293 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/cluster-baremetal-operator/3.log"
Mar 18 09:13:34.361996 master-0 kubenswrapper[26053]: I0318 09:13:34.361948 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-mcd6d_eb8f3615-9e89-4b51-87a2-7d168c81adf3/baremetal-kube-rbac-proxy/0.log"
Mar 18 09:13:34.372723 master-0 kubenswrapper[26053]: I0318 09:13:34.372681 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/1.log"
Mar 18 09:13:34.372918 master-0 kubenswrapper[26053]: I0318 09:13:34.372864 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-s98kp_25781967-12ce-490e-94aa-9b9722f495da/control-plane-machine-set-operator/0.log"
Mar 18 09:13:34.401264 master-0 kubenswrapper[26053]: I0318 09:13:34.401201 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/kube-rbac-proxy/0.log"
Mar 18 09:13:34.410405 master-0 kubenswrapper[26053]: I0318 09:13:34.410365 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/0.log"
Mar 18 09:13:34.412280 master-0 kubenswrapper[26053]: I0318 09:13:34.412260 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-n4t2h_fdb52116-9c55-4464-99c8-fc2e4559996b/machine-api-operator/1.log"
Mar 18 09:13:34.963484 master-0 kubenswrapper[26053]: I0318 09:13:34.963443 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/kube-multus-additional-cni-plugins/0.log"
Mar 18 09:13:34.971416 master-0 kubenswrapper[26053]: I0318 09:13:34.971370 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/egress-router-binary-copy/0.log"
Mar 18 09:13:34.978993 master-0 kubenswrapper[26053]: I0318 09:13:34.978944 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/cni-plugins/0.log"
Mar 18 09:13:34.986051 master-0 kubenswrapper[26053]: I0318 09:13:34.986015 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/bond-cni-plugin/0.log"
Mar 18 09:13:34.993159 master-0 kubenswrapper[26053]: I0318 09:13:34.993103 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/routeoverride-cni/0.log"
Mar 18 09:13:35.001222 master-0 kubenswrapper[26053]: I0318 09:13:35.001187 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/whereabouts-cni-bincopy/0.log"
Mar 18 09:13:35.009396 master-0 kubenswrapper[26053]: I0318 09:13:35.009346 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-68tmr_fdd2f1fd-1a94-4f4e-a275-b075f432f763/whereabouts-cni/0.log"
Mar 18 09:13:35.019756 master-0 kubenswrapper[26053]: I0318 09:13:35.019713 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-hq2gr_2c108235-6537-4130-a858-6d38cd71e4fd/multus-admission-controller/0.log"
Mar 18 09:13:35.028016 master-0 kubenswrapper[26053]: I0318 09:13:35.027966 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-hq2gr_2c108235-6537-4130-a858-6d38cd71e4fd/kube-rbac-proxy/0.log"
Mar 18 09:13:35.095750 master-0 kubenswrapper[26053]: I0318 09:13:35.095703 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h7vq8_af1fbcf2-d4de-4015-89fc-2565e855a04d/kube-multus/0.log"
Mar 18 09:13:35.111921 master-0 kubenswrapper[26053]: I0318 09:13:35.111866 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2xs9n_e48101ca-f356-45e3-93d7-4e17b8d8066c/network-metrics-daemon/0.log"
Mar 18 09:13:35.117812 master-0 kubenswrapper[26053]: I0318 09:13:35.117764 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-2xs9n_e48101ca-f356-45e3-93d7-4e17b8d8066c/kube-rbac-proxy/0.log"
Mar 18 09:13:35.593135 master-0 kubenswrapper[26053]: I0318 09:13:35.593084 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/1.log"
Mar 18 09:13:35.593744 master-0 kubenswrapper[26053]: I0318 09:13:35.593467 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-9s8lp_1deb139f-1903-417e-835c-28abdd156cdb/cluster-node-tuning-operator/0.log"
Mar 18 09:13:35.605627 master-0 kubenswrapper[26053]: I0318 09:13:35.605549 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-84qxz_cda44dd8-895a-4eab-bedc-83f38efa2482/tuned/0.log"
Mar 18 09:13:36.785058 master-0 kubenswrapper[26053]: I0318 09:13:36.784998 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5x8lj_f2fcd92f-0a58-4c87-8213-715453486aca/registry-server/0.log"
Mar 18 09:13:36.796165 master-0 kubenswrapper[26053]: I0318 09:13:36.796086 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5x8lj_f2fcd92f-0a58-4c87-8213-715453486aca/extract-utilities/0.log"
Mar 18 09:13:36.817561 master-0 kubenswrapper[26053]: I0318 09:13:36.817511 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5x8lj_f2fcd92f-0a58-4c87-8213-715453486aca/extract-content/0.log"
Mar 18 09:13:37.147828 master-0 kubenswrapper[26053]: I0318 09:13:37.147715 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nfdcz_1c322813-b574-4b46-b760-208ccecd01a5/registry-server/0.log"
Mar 18 09:13:37.155666 master-0 kubenswrapper[26053]: I0318 09:13:37.155624 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nfdcz_1c322813-b574-4b46-b760-208ccecd01a5/extract-utilities/0.log"
Mar 18 09:13:37.165961 master-0 kubenswrapper[26053]: I0318 09:13:37.165716 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-nfdcz_1c322813-b574-4b46-b760-208ccecd01a5/extract-content/0.log"
Mar 18 09:13:37.184160 master-0 kubenswrapper[26053]: I0318 09:13:37.184109 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-m862c_ca9d4694-8675-47c5-819f-89bba9dcdc0f/marketplace-operator/0.log"
Mar 18 09:13:37.185626 master-0 kubenswrapper[26053]: I0318 09:13:37.185592 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-89ccd998f-m862c_ca9d4694-8675-47c5-819f-89bba9dcdc0f/marketplace-operator/1.log"
Mar 18 09:13:37.258058 master-0 kubenswrapper[26053]: I0318 09:13:37.258010 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gpbt_bf5fd4cc-959e-4878-82e9-b0f90dba6553/registry-server/0.log"
Mar 18 09:13:37.268424 master-0 kubenswrapper[26053]: I0318 09:13:37.268375 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gpbt_bf5fd4cc-959e-4878-82e9-b0f90dba6553/extract-utilities/0.log"
Mar 18 09:13:37.276915 master-0 kubenswrapper[26053]: I0318 09:13:37.276768 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2gpbt_bf5fd4cc-959e-4878-82e9-b0f90dba6553/extract-content/0.log"
Mar 18 09:13:37.625808 master-0 kubenswrapper[26053]: I0318 09:13:37.625743 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r6jd_995ec82c-b593-416a-9287-6020a484855c/registry-server/0.log"
Mar 18 09:13:37.636245 master-0 kubenswrapper[26053]: I0318 09:13:37.636155 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r6jd_995ec82c-b593-416a-9287-6020a484855c/extract-utilities/0.log"
Mar 18 09:13:37.647082 master-0 kubenswrapper[26053]: I0318 09:13:37.647033 26053 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4r6jd_995ec82c-b593-416a-9287-6020a484855c/extract-content/0.log"
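The entries above are uniform: each records a `path="…"` of the form `/var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log`. As an illustrative sketch only (not part of the log; the two sample lines and the grouping logic are assumptions for demonstration), such entries can be extracted and tallied per namespace with a short script:

```python
import re

# Two sample kubelet journal lines in the format shown above (paths are illustrative).
lines = [
    'Mar 18 09:13:26.816772 master-0 kubenswrapper[26053]: I0318 09:13:26.816722 26053 log.go:25] '
    '"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_38b830ff/installer/0.log"',
    'Mar 18 09:13:28.331590 master-0 kubenswrapper[26053]: I0318 09:13:28.321403 26053 log.go:25] '
    '"Finished parsing log file" path="/var/log/pods/openshift-multus_multus-h7vq8_af1fbcf2/kube-multus/0.log"',
]

# Pull the path= value out of each "Finished parsing log file" entry.
path_re = re.compile(r'"Finished parsing log file" path="([^"]+)"')
paths = [m.group(1) for line in lines if (m := path_re.search(line))]

# Group by namespace: the pod directory under /var/log/pods is
# <namespace>_<pod-name>_<pod-uid>, so the namespace is the prefix before the first "_".
by_namespace: dict[str, list[str]] = {}
for p in paths:
    pod_dir = p.split("/")[4]      # e.g. "openshift-multus_multus-h7vq8_af1fbcf2"
    ns = pod_dir.split("_")[0]     # e.g. "openshift-multus"
    by_namespace.setdefault(ns, []).append(p)

for ns, ns_paths in sorted(by_namespace.items()):
    print(ns, len(ns_paths))
```

In practice the same extraction could be run over `journalctl -u kubelet` output; the regex keys off the structured `path="…"` field rather than column positions, so it is robust to the variable-width timestamp and PID prefixes seen in these entries.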